A pixel is neither a unit of length nor a unit of area; like a byte, it is a unit of information.
Sometimes it is used as a length or area with the conversion constant omitted, but we do that all the time; the article gives mass vs. force as an example.
Also worth mentioning that pixels are not always square. For example, the once-popular 320x200 resolution has pixels taller than they are wide.
But it does highlight that the common terminology is imperfect and breaks the regularity that scientists come to expect when working with physical units in calculations.
Scientists and engineers don't actually expect much: they make a lot of mistakes, are not very rigorous, and are not demanding of each other. It is common for units to be wrong, context-defined, socially dependent, and even sometimes added together when the + operator hasn't been properly defined.
I'd say it's better to call it a unit of counting.
If I have a bin of apples, and I say it's 5 apples wide, and 4 apples tall, then you'd say I have 20 apples, not 20 apples squared.
It's common to specify a length by a count of items passed along that length. E.g., a city block is a roughly square patch of ground bounded by roads, yet if you're traveling in a city, you might say "I walked 5 blocks." This is a linguistic shortcut that skips implied information. If you're trying to talk about both senses in an unclear context, additional words are required to convey the information; that's just how language works.
Exactly. Pixels are indivisible quanta, not units of any kind of distance. Saying pixel^2 makes as much sense as counting the number of atoms on the surface of a metal and calling it atoms^2.
Is it that, or is it a compound unit that has a defined width and height already? Something can be five football fields long by two football fields wide, for an area of ten football fields.
No, it is a count. Pixels can have different sizes and shapes, just like apples. Technically football fields vary slightly too, but not nearly as much as apples or pixels.
What's the standard size of a city block, the other countable example given by the original author?
Yes, city blocks are like pixels or apples. They do not have a standard size or shape.
> A Pixel Is Not A Little Square!
> This is an issue that strikes right at the root of correct image (sprite) computing and the ability to correctly integrate (converge) the discrete and the continuous. The little square model is simply incorrect. It harms. It gets in the way. If you find yourself thinking that a pixel is a little square, please read this paper.
> A pixel is a point sample. It exists only at a point. For a color picture, a pixel might actually contain three samples, one for each primary color contributing to the picture at the sampling point. We can still think of this as a point sample of a color. But we cannot think of a pixel as a square—or anything other than a point.
Alvy Ray Smith, 1995 http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
A pixel is simply not a point sample. A camera does not take point-sample snapshots; it integrates the light falling on little rectangular areas. A modern display does not reconstruct an image the way a DAC reconstructs sound; it renders little rectangles of light, generally with visible XY edges.
The paper's claim applies at least somewhat sensibly to CRTs, but one mustn't imagine that the voltage interpolation and shadow masking a CRT does correspond meaningfully to how modern displays work... and even for CRTs it was never actually correct to claim that pixels were point samples.
It is pretty reasonable in the modern day to say that an idealized pixel is a little square. A lot of graphics operates under this simplifying assumption, and it works better than most things in practice.
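To make the contrast concrete, here's a minimal sketch (hypothetical values, numpy assumed) of downscaling a striped 1-D signal under the two models: pure point sampling versus the "little square" box filter that averages over each output pixel's footprint.

    import numpy as np

    # Hypothetical 1-D "image": a fine stripe pattern, 12 input pixels, downscaled 3:1.
    src = np.array([0, 255] * 6, dtype=float)

    # Point-sample model: keep one input sample per output pixel.
    point_sampled = src[::3]                        # [  0. 255.   0. 255.]  -- the fine stripes alias into coarse ones
    # Little-square model: average the footprint each output pixel covers (box filter).
    box_filtered = src.reshape(-1, 3).mean(axis=1)  # [ 85. 170.  85. 170.]  -- area-averaged

    print(point_sampled, box_filtered)

Neither model is the physical truth, but the box filter is the simplifying assumption most practical downscaling code reaches for first.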
This is one of my favorite articles. Although I think you can define for yourself what your pixels are, for most purposes a pixel is a point sample.
A pixel is a dot. The size and shape of the dot is implementation-dependent.
The dot may be physically small, or physically large, and it may even be non-square (I used to work for a camera company that had non-square pixels in one of its earlier DSLRs, and Bayer-format sensors can be thought of as “non-square”), so saying a pixel is a certain size, as a general measure across implementations, doesn’t really make sense.
In iOS and macOS, we use “display units,” which can be pixels, or groups of pixels. The ratio usually changes from device to device.
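The mapping itself is trivial; the interesting part is that the scale factor is a per-device property. A toy sketch (the scale factors are illustrative, not tied to any particular device):

    # Logical display units ("points") map to physical pixels via a per-device scale factor.
    def points_to_pixels(points: float, scale: float) -> float:
        return points * scale

    print(points_to_pixels(44, 1.0))  # 44.0 physical pixels on a 1x display
    print(points_to_pixels(44, 2.0))  # 88.0 on a 2x display
    print(points_to_pixels(44, 3.0))  # 132.0 on a 3x display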
This is my favourite single-pixel interface: https://en.wikipedia.org/wiki/Trafficators
They are sort of coming back, as side-mirror indicators.
This depends upon who you ask. CSS defines a pixel as an angle:
https://www.w3.org/TR/css-values-3/#reference-pixel
That's the definition of a "reference pixel", not a pixel. They actually refer to a pixel (and the angle) in the definition.
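The reference pixel is defined in terms of the visual angle of a 1/96-inch pixel at a nominal arm's length of 28 inches, which the spec says works out to about 0.0213 degrees. A quick sanity check of that number:

    import math

    # Visual angle of 1/96 inch viewed from 28 inches (the CSS reference pixel).
    pixel_size_in = 1 / 96
    viewing_distance_in = 28
    angle_deg = math.degrees(2 * math.atan(pixel_size_in / (2 * viewing_distance_in)))
    print(f"{angle_deg:.4f} degrees")  # ~0.0213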
Pixel, used as a unit of horizontal or vertical resolution, typically implies the resolution of the other axis as well, at least up to common aspect ratios. We used to say 640x480 or 1280x1024 – now we might say 1080p or 2.5K but what we mean is 1920x1080 and 2560x1440, so "pixel" does appear to be a measure of area. Except of course it's not – it's a unit of a dimensionless quantity that measures the amount of something, like the mole. Still, a "quadratic count" is in some sense a quantity distinct from "linear count", just like angles and solid angles are distinct even though both are dimensionless quantities.
The issue is muddied by the fact that what people mostly care about is either the linear pixel count or pixel pitch, the distance between two neighboring pixels (or perhaps rather its reciprocal, pixels per unit length). Further confounding is that technically, resolution is a measure of angular separation, and to convert pixel pitch to resolution you need to know the viewing distance.
Digital camera manufacturers at some point started using megapixels (around the point that sensor resolutions rose above 1 MP), presumably because big numbers are better marketing. Then there's the fact that camera screen and electronic viewfinder resolutions are given in subpixels, presumably again for marketing reasons.
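For the pitch-to-angular-resolution conversion mentioned above, a rough sketch (the pitch and viewing distance are made-up illustrative numbers):

    import math

    # How many pixels fit into one degree of visual angle, given the pixel pitch
    # and the viewing distance (both in the same unit, e.g. millimetres).
    def pixels_per_degree(pitch_mm: float, distance_mm: float) -> float:
        angle_per_pixel = 2 * math.atan(pitch_mm / (2 * distance_mm))  # radians
        return math.radians(1) / angle_per_pixel

    # Illustrative: a ~0.27 mm pitch desktop monitor viewed from 60 cm.
    print(pixels_per_degree(0.27, 600))  # ~38.8 pixels per degree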
Digital photography then takes us on to subpixels, Bayer filters (https://en.wikipedia.org/wiki/Color_filter_array) and so on. You can also split the luminance and colour parts out. Most image and video compression puts more emphasis on the luminance profile and represents the colour more approximately. The subpixels on a digital camera (or a display, for that matter) take advantage of this quirk of human vision.
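A naive sketch of that chroma-subsampling idea (4:2:0-style, numpy assumed; real codecs filter and handle edges far more carefully): the luma plane stays at full resolution while each chroma plane is averaged down to a quarter of the samples.

    import numpy as np

    def subsample_chroma_420(y, cb, cr):
        # Keep luma (Y) at full resolution; average chroma (Cb, Cr) over 2x2 blocks.
        # Assumes even image dimensions.
        h, w = cb.shape
        cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        return y, cb_sub, cr_sub

    y = np.zeros((4, 4)); cb = np.arange(16.).reshape(4, 4); cr = cb.copy()
    _, cb_s, cr_s = subsample_chroma_420(y, cb, cr)
    print(cb_s.shape)  # (2, 2): a quarter as many chroma samples as luma samples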
Happens to all square shapes.
A chessboard is 8 tiles wide and 8 tiles long, so it consists of 64 tiles covering an area of, well, 64 tiles.
Not all pixels are square, though! Does anyone remember anamorphic DVDs? https://en.wikipedia.org/wiki/Anamorphic_widescreen
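For the non-square cases, the pixel aspect ratio just falls out of the storage and display aspect ratios. A simplified sketch (real DVD specs use slightly different active-area conventions, so the exact numbers vary):

    # Pixel aspect ratio (PAR) = display aspect ratio / storage aspect ratio.
    def pixel_aspect_ratio(width_px, height_px, display_w, display_h):
        return (display_w / display_h) / (width_px / height_px)

    print(pixel_aspect_ratio(320, 200, 4, 3))   # ~0.83: VGA 320x200 pixels are taller than wide
    print(pixel_aspect_ratio(720, 480, 16, 9))  # ~1.19: anamorphic NTSC DVD pixels are wider than tall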
City blocks, too.
In the US...
While it is certainly more common in the US, we occasionally use blocks as a measurement here in Sweden too. Blocks are just smaller and less regular here.
Reminds me of this Numberphile w/ Cliff Stoll [1]: The Nescafé Equation (43 coffee beans)
[1] https://youtu.be/3V84Bi-mzQM
A Pixel is a telephone.
This is a fun post by Nayuki - I'd never given this much thought, but it takes the premise and runs with it.
I’m surprised the author didn’t dig into the fact that not all pixels are square. Or that pixels are made of underlying RGB light emitters. And that those RGB emitters are often very non-square. And often not 1:1 emitter-to-pixel (stupid PenTile).
> "Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte."
or
> "I have made this longer than usual because I have not had time to make it shorter."
A pixel is a sample, or a collection of values of the Red, Green, and Blue components of light, captured at a particular location in a typically rectangular area. Pixels have no physical dimensions. A camera sensor has no pixels; it has photosites (four colour-sensitive elements per one rectangular area).
Is a pixel not a pixel when it's in a different color space? (HSV, XYZ etc?)
RGB is the most common colour space, but yes, other colour spaces are available.
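To make the photosite point above concrete, here's a naive sketch (numpy assumed) that collapses each 2x2 RGGB Bayer quad of a raw mosaic into a single RGB pixel; real demosaicing interpolates full-resolution colour instead of simply averaging.

    import numpy as np

    def rggb_quads_to_rgb(raw):
        # raw is a (2H, 2W) Bayer mosaic laid out as repeating RGGB quads.
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2, b], axis=-1)  # shape (H, W, 3)

    mosaic = np.random.rand(4, 6)            # a tiny 4x6 raw mosaic
    print(rggb_quads_to_rgb(mosaic).shape)   # (2, 3, 3)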