Confusion over "screen resolution" causes headaches for users…

(Apologies in advance – this is a very long blog post, because of its detailed technical nature. I promise in due course to post a version in Word .doc format on my website.)

Confusion over what’s commonly referred to as “screen resolution” causes headaches for computer users – literally. People all over the world are having a computing experience that’s nowhere near as good as it could or should be because their displays are wrongly set.

In a comment on my last post, my good online friend Richard Fink said:

“Resolution is a topic that’s dimly understood and can really use some detailed explaining. Just wrapping one’s head around the vocabulary is difficult. There is the native(?) resolution of the screen (the physical number of “points” the screen is capable of displaying) and then there is the virtual(?) number of points that the OS imposes by rounding out, right? Or wrong? And how does this all relate to pixels and pixel size?
I’ve done considerable googling on this, have ingested a White Paper on IE’s new Adaptive Zoom feature, and yet am still largely in the dark.

Will the new Bill Hill please don his mask and cape and shine some light on this?
I sure would appreciate it. Because when you say “resolution independent” I have no idea what that truly means, let alone what the ramifications are.
And I know I’m not alone.”

No, Richard, you are far from alone…

People normally refer to resolution in terms of the number of pixels horizontally and vertically on the screen – 800 x 600, for example. But what does 800 x 600 really mean?

The term “resolution”, as commonly used, is a misnomer. It’s only a pixel count, and tells you nothing about the actual resolution of that display.

Resolution ought to mean “The level of detail the screen can display”. It ought to be specified as 96 pixels per inch (ppi), or 133ppi, which easily translates to: “Able to resolve detail down to 1/96th of an inch, or 1/133rd of an inch”.

An 800 x 600 desktop display with a 17” screen would be pretty low-resolution – less than 60 pixels per inch. But an 800 x 600 mobile phone, with 3-inch display, would be really high resolution – 333ppi.

When you buy a display, the specification will almost never mention ppi; instead, manufacturers refer to dot pitch (the spacing from one pixel to the next), measured in millimeters. In my 133ppi example, 1920 x 1200 on a 17-inch display, that works out to a dot pitch of about 0.19mm. (But Apple is the exception – as usual – and markets the “high-resolution 133ppi screen” of the 17-inch MacBook Pro as a feature).
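
If it helps to see the arithmetic, here’s a quick sketch in Python (the function names are mine, purely for illustration; the numbers are the examples above):

```python
import math

def pixels_per_inch(h_pixels, v_pixels, diagonal_inches):
    """Resolution in ppi: diagonal length in pixels divided by diagonal size in inches."""
    return math.hypot(h_pixels, v_pixels) / diagonal_inches

def dot_pitch_mm(ppi):
    """Dot pitch: the spacing of one pixel, in millimeters (25.4mm per inch)."""
    return 25.4 / ppi

print(pixels_per_inch(800, 600, 17))    # ~58.8 ppi - the low-resolution 17" desktop above
print(pixels_per_inch(800, 600, 3))     # ~333 ppi  - the 3" phone display
print(pixels_per_inch(1920, 1200, 17))  # ~133 ppi  - the 17" MacBook Pro panel
print(dot_pitch_mm(133))                # ~0.19 mm  - the dot pitch quoted above
```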

MacBook Pro 17″ model: Marketed with “high-resolution 133ppi display” (And it runs Vista like a dream, using BootCamp…)

In some contexts, manufacturers use the term megapixels: roughly 0.8 MP for 1024 x 768, or about 2.4 MP if you count the Red, Green and Blue sub-pixels separately. Megapixels are more commonly quoted for digital cameras, but the term is sometimes used for displays.
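
The megapixel figure is just the total pixel count in millions – a quick check of the numbers above:

```python
width, height = 1024, 768
pixels = width * height        # 786,432 physical pixels
print(pixels / 1e6)            # ~0.8 megapixels
print(pixels * 3 / 1e6)        # ~2.4 megapixels if you count the R, G and B sub-pixels
```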

Mostly, though, people quote only the “pixel count” – e.g. 1024 x 768 – which tells you nothing about the level of detail your screen can display, since it ignores both screen size and pixel size. To know the true resolution of your display, you need the pixel count plus either the screen size or the pixel size.

This is not just semantics. Because these “pixel counts” have become standards, people use them all the time as measures of “resolution” – and that’s only true if you’re always comparing the same size of display.

This “resolution misnomer” affects millions of computer users. As a result of the confusion it has caused, only a minority of computer users are actually reaping the benefits of the higher-resolution displays they bought. In most cases, they end up running them at a lower resolution – inadvertently turning what should be a really great, crisp visual experience into an awful one. And they get more eye fatigue than they should.

How did this happen?

In the beginning was the CRT – the Cathode Ray Tube monitor, that hulking great brute of a box that used to sit on your desk.

You felt like a deer caught in the headlights of an oncoming semi – remember when you had to share your desk with one of these hulking brutes?


The CRT monitor began life as just another TV. It had no such thing as a “pixel”. The inside of the screen was covered with an amorphous layer of phosphors, which glowed when hit by a beam of electrons from a rapid-scanning “electron gun”. In order for the software running the computer to be able to address these phosphors, a “virtual” or “logical” pixel grid was created.

The “resolution” of this grid was dependent on:

  • The accuracy and tightness of the electron beam, and the granularity of the phosphors (usually determined by how much you were willing to pay for your display)
  • The number of “virtual pixels” your graphics card could manipulate per second (again, normally determined by cost).

Let’s be absolutely clear: in the CRT world, there was never such a thing as a true pixel. Even when Sony came out with its Trinitron aperture-grille displays, which had Red, Green and Blue phosphor areas aligned horizontally side-by-side, these did not behave in exactly the same way as the pixels on today’s Liquid Crystal Displays (LCDs).

Since phosphors “glow” when hit by the beam, there is always some “bleeding” of light into adjacent phosphors. This could actually benefit the user, especially when displaying text – think of it as free anti-aliasing. The analog signal also often led to the beam mis-targeting phosphors slightly.

Some history can explain what happened…

In 1981, IBM introduced its first color graphics card, which it named CGA (Color Graphics Adapter). It could address 640×200 logical “pixels”, and the highest color depth supported was 4-bit (16 colors). To keep the “Television” aspect ratio of the screen (4:3), the pixels were elongated, or non-square.

In 1984, IBM introduced a new EGA (Enhanced Graphics Adapter), which produced a display of 16 simultaneous colors from a palette of 64, and addressed up to 640×350 logical “pixels”.

In 1987, with the IBM PS/2 line of computers, IBM introduced the Video Graphics Array (VGA), able to address 640×480 “pixels”. (I keep putting pixels in quotes because it’s important to keep remembering that there were no “real” pixels in these displays, only logical ones…)

These “logical pixel counts” became standards. Other manufacturers cloned them. As new graphics cards came out, the figures went up – to 800 x 600, then 1024 x 768, 1280 x 1024, and so on. (Recognize those numbers?) And those standard configurations became referred to as “resolutions”, even though the term was a misnomer.

In the meantime, a new class of displays had appeared using Liquid Crystal technology. At first, these LCDs were black-on-white (in reality, dark gray on lighter gray), but by 1988 they had reached VGA resolution (Compaq SLT 286), and 256-color screens by 1993 (PowerBook 165c), progressing quickly to millions of colors and high resolutions.

And that was where it all went wrong. Manufacturers built them to the same standard “resolutions” (really, pixel counts – 1024 x 768, and so on) – because they wanted them to run on existing graphics cards.

The real tragedy for users was that computer operating systems treated them exactly like the CRTs they were replacing – as if their pixels were still only logical (and thus changeable). But they weren’t…

Unlike CRTs, LCD displays have real, actual pixels. You can see them if you examine a display with a designer’s 10x magnifying loupe. Each physical pixel is normally made up of Red, Green and Blue sub-pixels, arranged side by side. With a loupe, you can also see there’s a black line between each sub-pixel element. That’s the wiring track in the display, and it creates a hard boundary between each sub-pixel.

For a good illustration, which also explains how we used these sub-pixels to create ClearType, go here:

The wiring track emits no light, so there’s little bleed between adjacent sub-pixels. And because these pixels exist in the hardware and are not merely logical, each LCD display has its own native resolution.

Native Resolution: The actual number of physical pixels in the hardware of a display. (Again, it’s a bit of a misnomer, since it’s only a pixel count and tells you nothing about pixel size or screen size).

If you address your display at anything other than its native resolution, though, you’re asking for trouble.

On a CRT, since the pixels are logical-only, you could change the “resolution” either in software, or by changing your graphics card. It didn’t matter – it was just another “virtual pixel grid” for the system to compute.

But if your LCD display is, say, 1920 x 1200 pixels (like the 17” display on this MacBook Pro), the number corresponds exactly to a grid of real physical dots hard-wired into the display. It’s inherent in the hardware, and it can’t be changed by software. There really are 1920 pixels across my screen, and 1200 down. Each pixel needs to be addressed exactly. The true resolution of this display is 1/133rd of an inch, because there are 133 pixels per inch.

However, when people first ran Windows on their new high-resolution displays, they often found that the icons, menus and so on were too small to read comfortably. Again, there’s a cost associated with this – it’s like having to strain to read the “small print” on a document.

Unfortunately, there’s a very easy – but very wrong – way to fix this. If you don’t know what you’re doing, you go into Display Settings, and change your display “resolution” from 1920 x 1200 pixels to, say, the recognizable old favorite of 1024 x 768.

Your icons and text get bigger. But it’s a usability disaster. Your display is no longer running at its native resolution. Instead of 133ppi, you’re getting only 70.93ppi. Everything has to be scaled, because there’s no longer a 1:1 mapping between the pixel addresses your software calculates and the actual addresses of the physical pixels. That means lots of rounding and fudging of the numbers.

Look at the math involved. 70.93 is actually a repeating decimal (70.9333…) – a horrible number to deal with in this context. Instead of clean, integer-only calculations, the software now has to round every position to the nearest whole physical pixel. Scaling errors appear all over the place. Bitmap graphics become pixelated. Fonts, unlike bitmap graphics, are scalable – but ClearType, which depends on exact sub-pixel addressing, breaks horribly. And so on.
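
Here’s a rough sketch of where that rounding comes from (the numbers are the example above; the code is only illustrative, not how any particular scaler actually works):

```python
# Native panel: 1920 x 1200 physical pixels at 133ppi; requested: 1024 x 768.
native_width, native_ppi = 1920, 133
requested_width = 1024

scale = native_width / requested_width                # 1.875 - not a whole number
effective_ppi = native_ppi * requested_width / native_width
print(effective_ppi)                                  # 70.9333... - the repeating decimal

# Every logical pixel now lands at a fractional physical address,
# so it has to be rounded to the nearest real pixel - unevenly:
for logical_x in range(8):
    print(logical_x, "->", round(logical_x * scale))
# 0->0, 1->2, 2->4, 3->6, 4->8, 5->9, 6->11, 7->13
# (some neighbouring logical pixels end up 2 physical pixels apart, others only 1)
```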

You get the larger menus and icons you wanted – but the cost to your eyes is terrible.

There is a second way to do this, which you sometimes see: use only part of the screen for the lower pixel count, leaving a black frame of unused pixels around the outside. It avoids the scaling problems, but people feel cheated by it. Arguably, it would be a better solution when you have to reduce the pixel count to match a screen projector.

The overall cost of this misunderstanding about resolution is enormous. Only a minority of computer users with high-resolution displays actually run them at native resolution. And the problems this causes are a major reason why high-resolution displays have never taken off in the way they should have.

If you know what you’re doing, you can fix this properly. First, make sure your screen is running at its native resolution; then go into Display Properties (in Windows XP) or Personalize (in Vista) and use the DPI Scaling dialog to set the scaling for fonts to the real ppi of your display – instead of changing the “resolution”.
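
Why does setting the DPI to the display’s real ppi work? Windows converts point sizes into pixels using that DPI value (a point is 1/72 of an inch), so text keeps its intended physical size. A quick illustration:

```python
def points_to_pixels(points, dpi):
    """One point is 1/72 of an inch, so pixels = points * dpi / 72."""
    return points * dpi / 72

for dpi in (96, 120, 133):
    print(f"12pt text at {dpi}dpi -> {points_to_pixels(12, dpi):.1f}px")

# 12pt at 96dpi  -> 16.0px
# 12pt at 120dpi -> 20.0px
# 12pt at 133dpi -> 22.2px
# More pixels per glyph on the denser display, but the same physical size
# (1/6 of an inch) on screen - as long as Windows knows the display's real ppi.
```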

DPI Scaling Dialog in Windows Vista


Seven years ago, I had a 22” desktop LCD display (IBM T221) that handled a true 204ppi – 3840 x 2400 pixels! In those days, I had to jump through many hoops to get it to work properly: a lot of tweaking of things like icon sizes and the spacing between icons in the Display Properties/Appearance dialog. It was far more trouble than any regular user would be willing to take.

The real problem, though, was insurmountable. In most software, websites, line-of-business applications and so on, dimensions of graphics, dialog boxes, text windows etc. had all been specified in pixels.

Windows, most of the applications which run on it – and the Web – were built on the assumption that all displays were ~96ppi. That was true for a while – but no longer. And there’s a threshold – I think it’s somewhere around 133ppi – where software and sites built on that assumption start to break, horribly…

A bitmap graphic created for a 96ppi display shows up at less than half its intended size in each dimension on a 204ppi display – less than a quarter of the area. Even worse: if dialog boxes have been specified in pixel dimensions, the area provided for text is only a quarter of the size, but if the text itself scales properly it overflows and often gets clipped. It’s a real mess!
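
The shrinkage is easy to quantify, using the 96ppi and 204ppi figures above:

```python
linear = 96 / 204      # ~0.47 - less than half as big in each dimension
area = linear ** 2     # ~0.22 - less than a quarter of the intended area
print(linear, area)
```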

My good friend Chris Wilson on the Internet Explorer team created a workaround many years ago to run IE on really high-res displays. Instead of 96ppi, he created a switch which assumed 192ppi – and then every pixel was doubled. It wasn’t perfect, but it worked – and it still works today.

One bright spot in all of this mayhem was Microsoft Office. Starting with Office 2003, the team made Office resolution-independent. Word, PowerPoint, Excel, Publisher, Visio – all the Office applications have run like a dream on higher-resolution displays ever since. I have highlighted this in the past, but I’m glad to do it again, because I can never thank Office enough for grasping the nettle early, doing the right thing – and blazing a trail for others. If complex applications like these can be made resolution-independent, there’s really no excuse for anyone else.

Over recent years, Windows has been getting better about this (and I know Windows 7 is another step in the improvements). At one time, Windows also assumed ~96ppi. Greg Hitchcock of the ClearType and Readability Research team at Microsoft wrote a great blog entry about this.

You’ll see from the DPI Scaling dialog that there’s now a 120dpi option as well. In addition, you can use the Custom Scaling option at the bottom of the dialog box to set Font DPI to the actual ppi of your display. But that can cause problems, too, and on this 133ppi display I’ve fallen back to 120dpi for Font Scaling.

So here’s what you do:

  • If you’re running an LCD display, make sure you set Windows to the native resolution of that display. Then use the Font Scaling dialog to set Font DPI to 120. There’s no point in setting your Pixel count higher than native, either, even if your graphics card supports it. You can’t create new hardware pixels using software. (Although, with ClearType, we found a way to use three times as many in the critical X-axis by addressing the RGB sub-pixels separately for text, instead of the whole pixel triad).
  • If you’re building software or a Website, make it resolution-independent by never using pixel dimensions. Use percentages for dimensions like margins, and point sizes for text (which the software will translate into the correct number of pixels if DPI Scaling is set correctly); there’s a sketch of the idea just after this list.
  • It will take a very long time for this to work its way into common practice. But it needs to happen, and you can make your applications and sites future-proof if you do it now.
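
To make the percentages-and-points bullet concrete, here’s a minimal sketch of the principle in Python. The helper function is hypothetical, purely for illustration, not any real API: specify layout in relative or physical units, and convert to device pixels only at render time, using the display’s actual ppi.

```python
def to_device_pixels(value, unit, display_ppi, container_px=None):
    """Convert a resolution-independent dimension to physical pixels at render time."""
    if unit == "pt":          # points: 1/72 of an inch
        return round(value * display_ppi / 72)
    if unit == "in":          # physical inches
        return round(value * display_ppi)
    if unit == "%":           # percentage of a containing element
        return round(container_px * value / 100)
    raise ValueError("avoid raw pixel dimensions - they don't scale across displays")

# The same layout rendered on two very different displays:
for ppi in (96, 204):
    column = to_device_pixels(6, "in", ppi)                      # a 6-inch text column
    margin = to_device_pixels(5, "%", ppi, container_px=column)  # 5% margin
    body = to_device_pixels(11, "pt", ppi)                       # 11pt body text
    print(f"{ppi}ppi: column {column}px, margin {margin}px, 11pt text {body}px")
```

The same idea carries over to a website: percentages and point sizes rather than raw pixels, with the conversion left to the browser and the operating system.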

I’d like to thank Greg Hitchcock and Richard Fink, who both proof-read this article, clarified some technical issues, and made helpful comments.

A Microsoft Word .doc version of this post is now available on my website.


6 thoughts on “Confusion over "screen resolution" causes headaches for users…”

  1. Richard Fink

    Finally, I think I’ve got this fairly straight in my head. Now I understand why text set in a ClearType-optimized font like Constantia looks laser-sharp on my Dell laptop wide-screen and a bit fuzzy on my 19″ screen with a 4:5 aspect-ratio. It’s the discrepancy between the native resolution of the screen and the logical resolution set within Windows. I’ll be linking to this one liberally, for sure. But – always one to push my luck – I’m hoping for a sequel that explains what is meant by “resolution independent”.

  2. Bill Hill

    “Resolution-independent” is a critical topic. It must get solved on the Web going forward – otherwise we’re stuck in the Stone Age. Bitmap graphics and pixel dimensions make Websites the equivalent of Tablets of Stone. Can’t scale. This is a pretty big area to cover. There are general principles, and there’s probably a whole training course to be written on producing Future-Proof websites. I’ll think about it for a few days and see if I can come up with something. But that’s not a promise!

  3. Richard Fink

    Aw, c’mon. Screen resolution was a softball I knew you could do off the top of your head. The dolphins won’t miss you for a day or two. And look, no sooner do you write a thing but it gets a rave review.

  4. Anonymous

    Bill, you misunderstand the web. The CSS pixel is no longer the native pixel of the display but an idealised, standardised unit. Internet Explorer may still map CSS pixels to native pixels; advanced browsers don’t.
