Monthly Archives: May 2009

How I (Almost) Became A Russian Porn Star… (from the Bill Hill Archives)

Scottish secret weapon unveiled in Moscow. No low-angle shots, please!

A funny thing happened on my way to the Kremlin…

In 2006, I was in Moscow for the very first annual conference of the World Editors’ Forum to be held in the former Soviet Union. My Microsoft colleague Mike Cooper and I had been invited to speak.

I really liked Moscow. But get ready for “culture shock”, if it’s your first visit.

It began at Domodedovo, one of Moscow’s two airports. After an overnight flight from Seattle to London, a four-hour layover in the Business Class lounge and a second flight to Moscow, I stumbled off the plane and through immigration.

Stepping out of the arrivals gate into the main terminal, my thoughts were: “Get some Russian money.” “Get a taxi.” “Get to the hotel.” “Get some sleep.” Before I’d even left arrivals, I was mobbed by a bunch of guys who all looked like extras from a Hollywood movie about the Russian Mafiya (shaved heads, three-day stubble, everyone dressed in black leather jackets…), all shouting “Taxi?” “Taxi?”.

Hmmm – better not get a taxi until I can pay for it. Ah, there’s a sign I recognize – ATM. Brushing my way through the faux Mafiya, I plunk my case down in front of the machine, take out my Visa card, swipe it. “Enter Amount”. 350. “Dollars or roubles?” I’m sure I asked for $350 worth of roubles. What I got was a stack of roubles that would choke a horse. I later calculated it was worth about $1200 – way beyond my normal ATM limit for withdrawals. Quick – hide it from the taxi Mafiya!

OK. Now I can get a cab (Actually, now I can probably buy a cab…). Where’s the rank? I don’t want to use one of these casual airport guys if I can help it; I’d like an “official” taxi. If it was yellow, that would be reassuring.

There is no taxi rank at Domodedovo, as far as I can see – at least, there wasn’t when I was there. You follow the signs for Ground Transportation – thanking God for icons instead of text – and arrive outside the terminal. And you end up in an alley, made up of two high wooden walls, and – oh no! – lined from end to end with more Mafiya…

Speaking of icons, they’re a lifesaver when you’re in Russia. I can speak English, and studied French for five years. I can usually manage to work out what a sign means in any of the Latin-based languages, and find my way around. But it doesn’t work with Russian.

The fact I couldn’t understand the language, though, didn’t stop me from building two Russian screen fonts while I worked at Aldus, so we could have self-running product demos in the language. In gratitude, the Eastern European sales director brought me back a beautiful present from the Moscow Book Fair: The Life of Aldus Manutius – in Russian. It’s still one of my treasures…

Speaking of icons, they don’t always work. Imagine you’re a kilt-wearing Scotsman. Which restroom door do you pick?

Scotsmen | Men

Back at the airport, I give up and accept reality. There are no yellow cabs to be seen. So I brush past a couple of the heavier-looking guys and try to find one who’s smiling. “Taxi?” he says. “President’s Hotel”, say I. “OK, we go”.

Now this guy’s about nine feet tall and four feet wide at the shoulders. Grabs my wheeled carry-on bag and sets off at a fair clip. Better keep up. We race across one car park. Second car park. Third – ahhh! He throws my bag into the back of – you’ve guessed it – a Lada, and we’re off.

Turns out almost everyone in Moscow with a car moonlights as a cab driver. We drive for about 20 miles, first of all through birch forests, then past interminable rows of tired-looking apartment blocks. Every hundred yards or so, there’s a group of (mostly) men, gathered together. I see the bottles of beer and vodka. Ah – Glasgow on a Friday night, but without the pubs…

Moscow’s a city of contrasts; sitting in traffic with an ancient truck belching diesel fumes on one side, and a brand-new $350,000 Bentley limousine on the other…

Turns out the World Association of Newspapers has been very kind in its choice of hotel for speakers. The “President’s Hotel” is exactly that; it’s where Presidents stay when they visit Moscow…

Security’s so tight, the taxi driver can’t drop me at the lobby. There’s a barrier across the gate, with two armed policemen on duty. You show your passport and are admitted through a security turnstile. (That was to lead to another fun incident later).

I check in, and go up to my room. Except it’s not a room, it’s a suite, on one of the top floors, with the most staggering view of Moscow and the Moskva River. Right opposite my window – at about the same height – is the huge memorial to Peter The Great. Wow! Gilded onion-shaped domes glitter in the distance. We’re not in Kansas any more, Dorothy! Took me two days to discover that the door I thought led to a closet was in fact a second, guest lavatory…

The statue of Peter The Great, erected on its own “island” in the Moskva River – one of the tallest outdoor sculptures in the world. Peter strove for Russian maritime supremacy. My hotel in the background…

Next afternoon I have to go to the Kremlin for the opening ceremony of the conference. Then-President Vladimir Putin is giving the keynote speech. (During that same visit I get to meet Mikhail Gorbachev. It’s been a long and interesting journey from the East End of Glasgow…).

I call up the cab company recommended in the conference notes. A cab will be at my hotel at 5pm.

As usual, I’m waiting outside the lobby ten minutes early. Takes me a few minutes to realize there’s absolutely no motor traffic going past, none of the normal to-ing and fro-ing you’d expect at a big hotel.

Oh, oh! Cabs aren’t allowed past the security barrier… So I walk out into the street. Sure enough, there’s a cab waiting. It’s even from the company I’d called. But it’s not my ride. In (very) broken English, the driver agrees to call the company; my car will be along in five minutes.

So I wait on the sidewalk, growing more and more anxious by the minute. I should mention, at this point, that I’m wearing a Scottish kilt – my standard business suit – and might as well have a sign around my neck saying: “Please Mug or Kidnap Me – I’m from Out of Town”.

About fifteen minutes later, one of the armed cops walks over and says – in great English – “What are you waiting for?” I explain I have to get to the Kremlin in under an hour. “I will get you a car,” he says.

He steps out into the street, holds up his hand and stops the first car he sees, opens the door and tells the driver: “Take this man to the Kremlin!”. To me, the cop says, “You give him 200 roubles.”

The driver, of course, looks like yet another Mafiya extra. Speaks not a word of English. And he clearly doesn’t know his way around the Moscow traffic system. We get lost. Three times he stops and asks policemen for directions.

He must have been panicked. I mean, you’re driving peacefully along the road, when you’re flagged down by an armed policeman and told to take a strangely-dressed man to the Kremlin – he was probably terrified by the thought that if he failed this mission, it was the Gulag, for sure…

Eventually we get to the huge line of security turnstiles guarding the entrance to the Kremlin. I find my way to the hall where the opening ceremony’s to take place. There are hundreds of people milling around.

I can’t see anyone I know (no real surprise). There’s an Indian couple – he in business suit, she in sari. I’ll talk to them, I thought. They’ll speak English for sure, and I always get along really well with Indian people.

Turns out there are hundreds of press photographers milling around, too…

A guy in a Scottish kilt, talking to an Indian lady in a sari? Talk about “The picture that says International Conference”. Suddenly, we’re the center of attention. Flashes start to go off all around.

Next morning, in the hall between conference sessions, I’m hailed by a booming Russian voice. “It’s you! You’re famous! You’re in all the Russian papers and on TV today!” “You’re like a porn star!”. “Can I get your business card?”

I think it was the knees. In my whole time in Moscow, mine were the only male knees I ever saw. I’m a Scot. I was born with a license to bare my knees whenever and wherever I want – that’s why God gave Scotsmen hairy legs.

The guy turned out to be the editor of a Russian news bureau. I waited months for the call to star in a Russian porn movie. But it never happened…

Mike Cooper and I managed to walk around Moscow a little. The Kremlin. Red Square. St. Basil’s Cathedral. The Moskva River. Amazing! And that’s when I met Boris.

St. Basil’s


Walking along by the Moskva River, I realized I was being followed – either by a stray dog, or a KGB agent with a great disguise. He must have followed me for about three miles, stopping every time I did. Turns out there are quite a number of dogs living on the streets in Moscow – sleeping under cars, that sort of thing.

Boris and I walking along the banks of the Moskva River. You can see the outer wall of the Kremlin behind the river ferry. I am the only person wearing shorts in the whole of Moscow…

Now, I’m a sucker for dogs. If it had been Seattle, I’d probably have ended up taking Boris home. But there was no way I’d ever get him out of Russia, and I was sad to leave him behind…

Boris. He never told me his real name…

My overall impression of Russia at the time? Well I can’t really speak for the whole country, but Moscow was really jumping; it looked as if someone had exploded a giant capitalism bomb in the city center, and most people were trying to make a buck or two in the new reality. Don’t know what it’s like right now in the global recession – pretty rough for a lot of people, I would think.

I’ve never been attracted to Communism. However, as a Brit you have to be forever grateful for the huge sacrifices the Russian people made in World War II. Without them, it’s pretty likely Hitler would not have been stopped, and Britain would not have survived until the USA entered the war.

My father did his part to help Russia in wartime. He was a seaman, manning a “Hedgehog” depth-charge mortar on one of the Royal Navy’s anti-submarine destroyers, escorting convoys carrying fuel and ammunition to Murmansk – the dreaded Russian Convoy duty, so well captured in Alistair MacLean’s novel, “HMS Ulysses”, one of my Recommended Books on this blog.

eBook Publishers Learn A Lesson: The Markup Is Not The Book…

RocketBook: Less of a rocket, more a damp firecracker…

I joined the fledgling eBook team at Microsoft eleven years ago this month. When we began work on eBooks back in 1998, it was clear from the outset that no publisher wanted to sign up to supporting a single eBook format.

At that time, there were two eBook contenders with devices in the marketplace – RocketBook and SoftBook – both of which are now either gathering dust in the closets of “early adopters”, or taking up space in landfill somewhere…

SoftBook: Turned out to be just too soft…

Each used its own format. Microsoft Reader was a third. Soon, there were eBook readers from Adobe and a host of others. They proliferated like weeds.

Very understandably, no publisher wanted to bet its entire eBook future on a single format. It was a problem everyone recognized. Out of that grew a markup standard called Open eBook. Eventually, the Open eBook organization morphed into the International Digital Publishing Forum, and Open eBook became ePub.

It was clear to everyone that there would be different eBook formats for a long time to come – perhaps forever. The problem is that if the publisher wants any kind of Digital Rights Management (DRM) or protection, the raw markup somehow has to be wrapped in a secured software package.

In the case of Reader, Open eBook markup was converted to .Lit. If the book’s being sold in Adobe’s Digital Editions, then it’s wrapped by Adobe’s Content Server and served up as .epub.
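
Strip away the wrappers, though, and an ePub is just a ZIP container full of markup, styles and images. Here’s a minimal Python sketch that peeks inside an unencrypted .epub – the filename is made up for illustration:

```python
# Minimal sketch: peek inside an (unencrypted) .epub container.
# An .epub is a ZIP archive; the markup is just files inside the wrapper.
import zipfile

def inspect_epub(path):
    with zipfile.ZipFile(path) as z:
        # The container spec requires a 'mimetype' entry declaring the format.
        print(z.read("mimetype").decode("ascii"))   # application/epub+zip
        for name in z.namelist():
            print(name)   # container.xml, the .opf package, XHTML content, CSS...

inspect_epub("huckleberry_finn.epub")   # hypothetical filename
```

A DRM-wrapped book encrypts the cargo, but the principle stands: the markup is not the book – it’s just the contents of whatever container the retailer wraps around it.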

Amazon has entered the eBook fray in a spectacular way with its Kindle, which uses its own .azw format (again, with digital rights protection).

Since Amazon has its own “closed” device, its DRM can be a lot more transparent to its customers than DRM which has to protect content in an open software environment like a Windows PC or a Macintosh. Both MS-Reader and Adobe Digital Editions require the user to activate the reader software they’re using.

Many people hate DRM, and suggest that publishers are trying to hang onto the past. “It hasn’t worked for music, or anything else,” goes the litany, “so why do publishers believe it will work for books?” These are often the same people who insist all old business models are dead, and just don’t know it yet, and that all content will eventually be free.

Personally, as a writer, I’m not ready just yet to give up on being paid for my work. I’m writing a book right now, and if it takes me a year, I’d like to hope I’ll be able to pay the mortgage at the end of it.

Unlike many of the litanists, I believe that publishers and their editors perform a vital function in improving the quality of material which gets published. They need to get real, though, accept that the removal of the requirement to print and distribute physical copies of books has driven publishing costs down dramatically, and re-work their business models accordingly.

eBooks should be a lot cheaper, and could be a lot cheaper, without harming publishers’ or writers’ incomes. But a world of nothing but free content is like free cable TV – 500 Channels, and you can spend hours searching it to find there’s nothing you want to watch.

However, the issue here isn’t the different ways of wrapping standard markup. It’s what happens to the markup when it gets rendered.

It’s exactly the same problem that I wrote about yesterday on this blog in relation to the Web. I created a page which used absolutely Web-standards HTML markup, and a standard CSS3 stylesheet – both verified as such by the World Wide Web Consortium’s validation tools.

Yet the final rendering worked the way I wanted it to on only one Web browser. On three others it broke. One just made my page slightly ugly – the others hit it with a truck.

And there’s the issue for eBook publishers. Even though they all standardize on ePub as their markup, what happens to it when the reader sees it is out of their control.

I’m not talking here about the reader’s “personal comfort” decisions – like making the text larger, for instance. Readers have to be able to do that.

I’m talking about what happens at a lower level, in the rendering engine and its text and page composition engine.

Take Microsoft Reader, for example. At its heart is a text composition engine called Page and Table Services – the result of hundreds of man-years of effort by one of the best teams in the company (I can say that – I never worked in that team).

Microsoft Reader – still a pretty comfortable read…

At the heart of Adobe’s Digital Editions is that company’s terrific text composition expertise. Adobe (and Aldus, which it acquired in 1994) has been doing this for decades, for professional printers and publishers with the highest possible standards.

Both will compose to their own metrics.

Huckleberry Finn in PDF: Nice! (But you can’t change the text size)

In Adobe’s Digital Editions, I looked at two free ePub books, and one free book in Adobe’s PDF format. Huckleberry Finn, in PDF, was beautifully set; nice line-length, great word and line-spacing, hyphenation and justification. Just about perfect. The two ePubs, though, were a bit rough – justification but no hyphenation, and lots of other problems…

ePub in Digital Editions: Not such a good read…

Amazon clearly has its own engine in Kindle. It’s not bad – but it does do weird stuff that it shouldn’t, and doesn’t do other things that it should. It’s readable, but could be improved.

(Disclosure here: I have to earn a living outside Microsoft now, and there’s a limit to how much free advice I’m going to give…)

I’m clearly not the only one who’s spotted this problem. On the IDPF’s own homepage is an Open Letter from the Association of American Publishers, which says:
“We encourage the IDPF to provide support to facilitate implementation industry-wide. We recognize that a number of issues remain, and we encourage the IDPF to work with its member organizations to develop guidelines/plans for addressing:
  • Quality assurance of any other formats which are created based on the EPUB version
  • Conversion to .LIT and eReader
  • How to handle books that do not have reflowable text and are not appropriate for EPUB”

Well over a decade on the road to eBooks, and we’re clearly not there yet…

Web-Standards Markup Certainly Won’t Give You Cross-Browser Sites at higher resolution…

How I want my site to look: Web-Standards HTML 4.01 and CSS3 in Internet Explorer 8. Click on any of the screenshots in this post for a larger view. They’re saved as JPEGs.


I hope readers of this blog who struggled all the way through the long post on screen resolution now understand why I’m so keen on being able to create paginated, multi-column content on the Web which people can read with as few distractions as possible.

If you read all of that post, you did a lot of scrolling in order to read the single-column layout. I hope – like me – you grew to resent all of the unused screen real estate either side of that single column.

Single-column layout can get tiring, and make the reader want to go elsewhere. But the Web shouldn’t be a place only for those with Attention Deficit. It should be a place where people can publish any kind of content, and where people who want to spend time reading that content should be able to do so as comfortably as possible.

If you prefer to bounce around the Web like a gadfly, spending a little time at each of a large number of different sites, that’s fine. But the Web shouldn’t force you to behave that way. It ought to be possible to take the time to absorb information, too – and feel comfortable while doing so.

I’ve been doing another set of experiments on my website, and produced a set of screenshots to illustrate what I’m trying to do, and some of the problems I’ve hit. Some of those problems arise because I’m trying to live in a “high-resolution world”, and doing both my website authoring and viewing on my new MacBook Pro laptop at 133ppi (pixels per inch).

The more I use this machine, the more I realize the future I want could be a lot closer than I thought. Let me explain…

The resolution of human vision is about 600 dpi (dots per inch). So theoretically we need 600ppi screens to make what we see on a computer as “real” as the natural world.

The problem is that to go from the ~100ppi of the average computer screen of today to 600ppi means computing 36 times as many pixels per second (n-squared, because you increase the resolution in both directions). That’s a huge mountain to climb for graphics cards, the batteries which run them – and the heat they generate.

Fortunately, that’s not necessary. The glossy magazines you buy are normally printed at no more than 185 lines per inch. And with ClearType – which is a resolution multiplier for text, since it has 3x the number of sub-pixels with which to work – you have gone past the equivalent of 185ppi on this 133ppi display.
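
The arithmetic behind those two claims is easy to check – a quick sketch (remember the 3x multiplier applies only horizontally, and only to text):

```python
# Why 600ppi is such a mountain: pixels to compute scale with the square.
base_ppi, target_ppi = 100, 600
print((target_ppi / base_ppi) ** 2)   # 36.0 - 36x as many pixels per second

# ClearType addresses the R, G and B sub-pixels separately, tripling the
# effective horizontal resolution for text on an LCD.
screen_ppi = 133
print(screen_ppi * 3)                 # 399 - well past a 185-lpi glossy magazine
```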

It’s more than that, though. Not all high-resolution displays are created equal. I had a 133ppi Dell laptop about 12 years ago, and a 147ppi Dell laptop ten years ago. But I have to say, their displays were not in the same class as this 133ppi display from Apple. It’s stunningly bright and crisp. It seems a lot brighter running Vista than MacOSX – which probably means I’d be voraciously devouring battery charge when running without mains power. But I don’t really do that very often. And I like the brightness. Makes OSX look positively dingy…

It’s not only ClearType that looks good. I went on to Adobe’s site to try out their new DF4 rendering, and found it very crisp and readable on this screen.

It might be that 133ppi on screens of this quality is quite enough resolution – at least until there’s some kind of graphics technology breakthrough.

When it comes to Web authoring and surfing, email, FaceBook, LinkedIn, news sites etc., I still prefer the Windows versions of all the browsers.

Now I’m no longer at Microsoft, I’ve been deliberately using Firefox as my default browser to make sure there’s no lingering “you always prefer what you’re used to” bias. I also have the latest versions or betas of Chrome and Safari, and of course the shipping version of Internet Explorer 8.

I’m using only Web-standards markup. Everything gets checked using the W3C’s HTML and CSS validation tools.

Since this Mac screen is so bright, I found the white backgrounds I’d been using previously on my site to be punching out too much light, even with multicolumn text layout. So I switched to white text on a black background – same as on this blog. However, even the white text on that seemed too bright. So I ended up with light gray on black. It reads very well on this display, and feels very restful for the eyes. I’d love to get your feedback on how it looks on your screen. Here’s the link to the test page…

I included the Internet Explorer 8 screen shot at the beginning of this post because that’s exactly how I wanted my site to look. Authoring in Notepad++ gave me results in Internet Explorer which were entirely predictable. Unfortunately, the same was not true in any of the other browsers, even with absolutely Web-standards markup.

I’m not certain whether the problem is the way different browsers deal with the markup – it really shouldn’t be – or whether it relates to the high-resolution display and the way they cope with that.

In IE8, I found the 12pt text I’d been using on my site was too small at this resolution. I took it up to 17pt, and changed all other text sizes to fit. I was really pleased with how the final page turned out.

Then I fired up the other browsers to check it.

Firefox 3

Firefox – not too dreadful, but a few glitches…

  1. Badly positioned paragraph
  2. Navigation buttons have floated to wrong position
  3. Photo not scaled up to fit column
  4. One-line caption becomes two lines, changing column line-break
  5. This is an issue I’ve seen in Firefox for ages, and one they should fix. When you go FullScreen (F11, or Fn-F11 on the Mac) it forgets to repaint the new area of the screen created at the bottom, leaving garbage or old lines of text from the previous rendering.

Google Chrome on Windows Vista

Quite a mess! Lots of content you have to scroll to see, since it no longer fits on a single “page”.

  1. Headings become too big. Masthead sticks out too far; “SECRET BLOGOZINE” heading becomes two lines, busting the layout
  2. Paragraph sets too big, overflowing space provided
  3. Picture is scaled up too big, sticking out into column 2
  4. Pull quote takes up too many lines; so does all the body text, forcing scrolling to read all content.
  5. No Full Screen switch. I thought the latest beta had one – but it seems to have disappeared. So you end up having to scroll.

Safari4 on Windows Vista

Where did the graphics go? (The picture and navigation buttons have all disappeared.) Again, you have to scroll to see all the content. Many issues seem similar to Chrome – probably because both use WebKit.

  1. No Full Screen mode
  2. Headings break
  3. Text overflows
  4. Picture didn’t show up at all
  5. Navigation buttons got lost
  6. One-line caption becomes two lines
  7. Pull quote takes up too many lines

I have a feeling that the problems with Chrome and Safari are more to do with not handling higher resolutions very well. The issues look very like those I used to hit all the time when switching to higher-resolution devices. I tried using their Zoom capabilities, but that didn’t solve the problems either.

Now I’m still quite the newbie here. I’m sure many of you old and grizzled Web designers are shaking your heads, and saying, “Well, he should (or shouldn’t) have done X or Y, and that wouldn’t have happened”.

But why should anyone wanting to create Web content have to learn the secret handshakes? Shouldn’t Web authoring be easy and open to everyone? Shouldn’t it be democratic – and not controlled by some “priesthood” who understand the “Latin”?

Shouldn’t Web-standards markup be the lingua franca? So that if you can speak it, you can talk to anyone, anywhere?

I know I’m still using pixel dimensions on my site. And that means I’m not yet resolution-independent. But shouldn’t Web-standards markup at least mean all the browsers handle the same content the same way?

Unless we can get to that point, I don’t see a lot of hope for Web standards. If I wanted to create cross-browser, multi-column, paginated content today, I’d be forced to use Adobe’s proprietary Flex and AIR technologies and their browser plugins.

Confusion over “screen resolution” causes headaches for users…

(Apologies in advance – this is a very long blog post, because of its detailed technical nature. I promise in due course to post a version in Word .doc format on my website).

Confusion over what’s commonly referred to as “screen resolution” causes headaches for computer users – literally. People all over the world are having a computing experience that’s nowhere near as good as it could or should be because their displays are wrongly set.

In a comment on my last post, my good online friend Richard Fink said:

“Resolution is a topic that’s dimly understood and can really use some detailed explaining. Just wrapping one’s head around the vocabulary is difficult. There is the native(?) resolution of the screen (the physical number of “points” the screen is capable of displaying) and then there is the virtual(?) number of points that the OS imposes by rounding out, right? Or wrong? And how does this all relate to pixels and pixel size?
I’ve done considerable googling on this, have ingested a White Paper on IE’s new Adaptive Zoom feature, and yet am still largely in the dark.

Will the new Bill Hill please don his mask and cape and shine some light on this?
I sure would appreciate it. Because when you say “resolution independent” I have no idea what that truly means, let alone what the ramifications are.
And I know I’m not alone.”

No Richard, you are far from alone…

People normally refer to resolution in terms of the number of pixels horizontally and vertically on the screen; 800 x 600, for example. But what does 800 x 600 really mean?

The term “resolution”, as commonly used, is a misnomer. It’s only a pixel count, and tells you nothing about the actual resolution of that display.

Resolution ought to mean “The level of detail the screen can display”. It ought to be specified as 96 pixels per inch (ppi), or 133ppi, which easily translates to: “Able to resolve detail down to 1/96th of an inch, or 1/133rd of an inch”.

An 800 x 600 desktop display with a 17” screen would be pretty low-resolution – less than 60 pixels per inch. But an 800 x 600 mobile phone, with 3-inch display, would be really high resolution – 333ppi.

Usually, when you buy a display, the specification will almost never mention ppi; instead, manufacturers refer to dot-pitch, measured in millimeters. In my 133ppi example – 1920×1200 on a 17-inch display – the dot-pitch works out at 0.19mm. (But Apple is the exception – as usual – and markets the “high-resolution 133ppi screen” of the 17-inch MacBook Pro as a feature.)

MacBook Pro 17″ model: Marketed with “high-resolution 133ppi display” (And it runs Vista like a dream, using BootCamp…)

In some contexts, manufacturers use the term megapixels: 1024 x 768 is about 0.8 MP, or 2.4 MP if you count the sub-pixels. Megapixels more commonly refer to digital cameras – but the term is sometimes used for displays.

Mostly, though, people quote only the “pixel count” – e.g. 1024 x 768 – which tells you nothing about the level of detail your screen can display, since it ignores screen size or pixel size. To know the resolution of your display, you need to know both the pixel count and one or other of those numbers.
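
Here’s that calculation as a small Python sketch – true resolution and dot-pitch from a pixel count plus a diagonal screen size (using the screen sizes quoted above):

```python
import math

def true_resolution(px_w, px_h, diagonal_inches):
    """Pixels per inch, and dot-pitch in mm, from pixel count + screen size."""
    ppi = math.hypot(px_w, px_h) / diagonal_inches
    return ppi, 25.4 / ppi   # 25.4mm per inch gives mm per pixel

print(true_resolution(800, 600, 17))     # ~59ppi  - a low-resolution desktop
print(true_resolution(800, 600, 3))      # ~333ppi - a high-resolution phone
print(true_resolution(1920, 1200, 17))   # ~133ppi, ~0.19mm dot-pitch
```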

This is not just semantics. Because these “pixel counts” have become standards, people use them all the time as measures of “resolution” – and that’s only true if you’re always comparing the same size of display.

This “resolution misnomer” affects millions of computer users. As a result of the confusion it has caused, only a minority of computer users are actually reaping the benefits of the higher-resolution displays they bought. In most cases, they end up using them at lower resolution – inadvertently turning what should be a really great, crisp visual experience into an awful one. And they get more eye fatigue than they should.

How did this happen?

In the beginning was the CRT – the Cathode Ray Tube monitor, that hulking great brute of a box that used to sit on your desk.

You felt like a deer caught in the headlights of an oncoming semi – remember when you had to share your desk with one of these hulking brutes?


The CRT monitor began life as just another TV. It had no such thing as a “pixel”. The inside of the screen was covered with an amorphous layer of phosphors, which glowed when hit by a beam of electrons from a rapid-scanning “electron gun”. In order for the software running the computer to be able to address these phosphors, a “virtual” or “logical” pixel grid was created.

The “resolution” of this grid was dependent on:

  • The accuracy and tightness of the electron beam, and the granularity of the phosphors (usually determined by how much you were willing to pay for your display)
  • The number of “virtual pixels” your graphics card could manipulate per second (again, normally determined by cost).

Let’s be absolutely clear: In the CRT world, there was never such a thing as a true pixel. Even when Sony came out with its Trinitron aperture grill displays, which had Red, Green and Blue phosphor areas aligned horizontally side-by-side, these did not behave in exactly the same way as the pixels on today’s Liquid Crystal Displays (LCDs).

Since phosphors “glow” when hit by the beam, there is always some “bleeding” of light to adjacent phosphors. This could actually benefit the user, especially when displaying text. Think of it as free anti-aliasing. The analog signal also often led to mis-targeting.

Some history can explain what happened…

In 1981, IBM introduced its first color graphics card, which it named CGA (Color Graphics Adapter). It could address 640×200 logical “pixels”, and the highest color depth supported was 4-bit (16 colors). To keep the “television” aspect ratio of the screen (4:3), the pixels were elongated, or non-square.

In 1984, IBM introduced a new EGA (Enhanced Graphics Adapter), which produced a display of 16 simultaneous colors from a palette of 64, and addressed up to 640×350 logical “pixels”.

In 1987, with the IBM PS/2 line of computers, IBM introduced the Video Graphics Array (VGA), able to address 640×480 “pixels”. (I keep putting pixels in quotes because it’s important to keep remembering that there were no “real” pixels in these displays, only logical ones…)

These “logical pixel counts” became standards. Other manufacturers cloned them. As new graphics cards came out, the figures went up – to 800 x 600, then 1024 x 768, 1280 x 1024, and so on. (Recognize those numbers?) And those standard configurations became referred to as “resolutions”, even though the term was a misnomer.

In the meantime, a new class of displays had appeared using Liquid Crystal technology. At first, these LCDs were black-on-white (in reality, dark gray on lighter gray), but by 1988 they had reached VGA resolution (Compaq SLT 286), and 256-color screens by 1993 (PowerBook 165c), progressing quickly to millions of colors and high resolutions.

And that was where it all went wrong. Manufacturers built them to the same standard “resolutions” (really, pixel counts – 1024 x 768, and so on) – because they wanted them to run on existing graphics cards.

The real tragedy for users was that computer operating systems treated them exactly like the CRTs they were replacing – as if their pixels were still only logical (and thus changeable). But they weren’t…

Unlike CRTs, LCD displays have real, actual pixels. You can see them if you examine a display with a designer’s 10x magnifying loupe. Each physical pixel is normally made up of Red, Green and Blue sub-pixels, arranged side by side. With a loupe, you can also see there’s a black line between each sub-pixel element. That’s the wiring track in the display, and it creates a hard boundary between each sub-pixel.

For a good illustration, which also explains how we used these sub-pixels to create ClearType, go here:

The wiring track emits no light, so there’s little bleed between adjacent sub-pixels. And because these pixels exist in the hardware and are not merely logical, each LCD display has its own native resolution.

Native Resolution: The actual number of physical pixels in the hardware of a display. (Again, it’s a bit of a misnomer, since it’s only a pixel count and tells you nothing about pixel size or screen size.)

If you address your display at anything else but native resolution, though, you’re asking for trouble.

On a CRT, since the pixels are logical-only, you could change the “resolution” either in software, or by changing your graphics card. It didn’t matter – it was just another “virtual pixel grid” for the system to compute.

But if your LCD display is, say, 1920 x 1200 pixels (like the 17” display on this MacBook Pro), the number corresponds exactly to a grid of real physical dots hard-wired into the display. It’s inherent in the hardware, and it can’t be changed by software. There really are 1920 pixels across my screen, and 1200 down. Each pixel needs to be addressed exactly. The true resolution of this display is 1/133rd of an inch, because there are 133 pixels per inch.

However, when people first ran their new high-resolution displays with Windows, for example, they often found that the icons, menus, etc. were too small to read comfortably. Again, there’s a cost associated with this – it’s like having to strain to read the “small print” on a document.

Unfortunately, there’s a very easy – but very wrong – way to fix this. If you don’t know what you’re doing, you go into Display Settings, and change your display “resolution” from 1920 x 1200 pixels to, say, the recognizable old favorite of 1024 x 768.

Your icons and text get bigger. But it’s a usability disaster. Your display is no longer running at its native resolution. Instead of 133ppi, you’re getting only about 70.93ppi. Everything has to be scaled, because there’s no longer a 1:1 mapping between the pixel addresses your software calculates and the actual digital addresses of the physical pixels. That means lots of rounding and fudging of the numbers.

Look at the math involved. 70.93 is a repeating decimal – a horrible number to deal with in this context. Instead of clean, integer-only calculations, software calculations now have to be rounded to the nearest whole pixel. Scaling errors appear all over the place. Bitmap graphics become pixellated. Fonts, unlike bitmap graphics, are scalable – but ClearType, which depends on exact sub-pixel addressing, breaks horribly. And so on.
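
Here’s a sketch of that math, assuming the 1920 x 1200, 133ppi panel described above being driven at 1024 x 768:

```python
# What happens when 1024 logical pixels are stretched across 1920 real ones.
native_px, native_ppi, logical_px = 1920, 133, 1024

print(native_ppi * logical_px / native_px)   # 70.9333... - the repeating decimal

scale = native_px / logical_px               # 1.875 physical px per logical px
for x in range(5):
    # Almost every logical position lands between physical pixels,
    # so it must be rounded - and the rounding error varies per pixel.
    print(x, x * scale)                      # 0.0, 1.875, 3.75, 5.625, 7.5
```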

You get the larger menus and icons you wanted – but the cost to your eyes is terrible.

There is a second way to fix this, which you sometimes see: create the lower-pixel-count version by using fewer pixels on the screen, leaving a black frame of unused pixels around the outside. However, people feel cheated with this technique, even though it avoids scaling problems. Arguably, this would be a better solution when reducing the display resolution to match a screen projector.

The overall cost of this misunderstanding about resolution is enormous. Only a minority of computer users with high-resolution displays actually run them at native resolution. And the problems this causes are a major reason why high-resolution displays have never taken off in the way they should have.

If you know what you’re doing, you can fix this properly. First you make sure your screen is running at its native resolution, then go into Display Properties (in Windows XP) or Personalize (in Vista) and use the DPI Scaling dialog to set the Scaling (for fonts) to the real ppi of your display – instead of changing the “resolution”.

DPI Scaling Dialog in Windows Vista


Seven years ago, I had a 22” desktop LCD display (IBM T221) that handled a true 204ppi. (3840 x 2400 pixels!!!!) In those days, I had to jump through many hoops to get it to work properly. You could get it to work after a lot of tweaking of things like icon sizes, spacing between icons etc., in the Display properties/Appearance dialog. But it was a lot more trouble than any regular user would wish to take.

The real problem, though, was insurmountable. In most software, websites, line-of-business applications and so on, dimensions of graphics, dialog boxes, text windows etc. had all been specified in pixels.

Windows, most of the applications which run on it – and the Web – were built on the assumption that all displays were ~96ppi. That was true for a while – but no longer. And there’s a threshold – I think it’s somewhere around 133ppi – where software and sites built on that assumption start to break, horribly…

A bitmap graphic which was created on a 96ppi display is less than a quarter of its intended size when viewed on a 204ppi display (half as big in each dimension). Even worse: if dialog boxes have been created using pixel dimensions, the area provided for text is only a quarter of the size – but if the text scales properly, it overflows, and often clips. It’s a real mess!
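
The shrinkage is simple arithmetic:

```python
# A 96ppi bitmap drawn pixel-for-pixel on a 204ppi screen:
linear = 96 / 204
print(linear)        # ~0.47 - roughly half as big in each dimension
print(linear ** 2)   # ~0.22 - less than a quarter of the original area
```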

My good friend Chris Wilson on the Internet Explorer team created a workaround many years ago to run IE on really high-res displays. Instead of 96ppi, he created a switch which assumed 192ppi – and then every pixel was doubled. It wasn’t perfect, but it worked – and it still works today.

One bright spot in all of this mayhem was Microsoft Office. Starting with Office 2003, the team made Office resolution-independent. Word, PowerPoint, Excel, Publisher, Visio – all the Office applications have run like a dream on higher-resolution displays ever since. I have highlighted this in the past, but I’m glad to do it again, because I can never thank Office enough for grasping the nettle early, doing the right thing – and blazing a trail for others. If complex applications like these can be made resolution-independent, there’s really no excuse for anyone else.

Over recent years, Windows has been getting better about this (and I know Windows7 is another step in the improvements). At one time, Windows also assumed ~96ppi. Greg Hitchcock of the ClearType and Readability Research team at Microsoft wrote a great blog entry about this.

You’ll see from the DPI Scaling dialog that there’s now a 120dpi option as well. In addition, you can use the Custom Scaling option at the bottom of the dialog box to set Font DPI to the actual ppi of your display. But that can cause problems, too, and on this 133ppi display I’ve fallen back to 120dpi for Font Scaling.

So here’s what you do:

  • If you’re running an LCD display, make sure you set Windows to the native resolution of that display. Then use the Font Scaling dialog to set Font DPI to 120. There’s no point in setting your Pixel count higher than native, either, even if your graphics card supports it. You can’t create new hardware pixels using software. (Although, with ClearType, we found a way to use three times as many in the critical X-axis by addressing the RGB sub-pixels separately for text, instead of the whole pixel triad).
  • If you’re building software or a Website, make it resolution-independent by never using pixel dimensions. You can use percentages for dimensions like margins, and point sizes for text – which the software will translate into the correct number of pixels if DPI Scaling is set correctly (see the sketch after this list).
  • It will take a very long time for this to work its way into common practice. But it needs to happen, and you can make your applications and sites future-proof if you do it now.
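
The point-to-pixel translation mentioned in the second bullet is a one-liner – a sketch, assuming the standard definition of a point as 1/72 of an inch:

```python
def pt_to_px(points, dpi):
    # One point = 1/72 inch, so the pixel size follows the system DPI.
    return points * dpi / 72

for dpi in (96, 120, 133):
    print(dpi, pt_to_px(12, dpi))   # 16.0px, 20.0px, ~22.2px for 12pt text
```

Specify 12pt and the text stays the same physical size on any correctly-configured display; specify 16px and it shrinks as the pixels do.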

I’d like to thank Greg Hitchcock and Richard Fink, who both proof-read this article, clarified some technical issues, and made helpful comments.

A Microsoft Word .doc version of this post is now available on my website.

Solar-powered computing feels so good: Next – the solar car…

Before the beginning of last winter, my wife Tanya and I decided to take the plunge and go solar in our home on the Hawaiian island of Kauai. It seems crazy that on a group of islands with this much sunshine and wind, 99% of the electricity used is generated from fossil fuels – which of course have to be imported.

We began by focusing on the largest use of power for most people on the islands – hot water. We had a new solar hot water tank installed, and two solar hot water panels put on the roof. It quickly became obvious that – except perhaps once or twice a year, in winter, if there were two or three weeks of constant rain (it does happen!) – we’d never again have to use electricity to heat our water.

Now, at this point you may be saying, “It’s OK for you – you’re in Hawaii!”. But the first solar hot water system we ever installed was in our home in Scotland in around 1983 – at the same latitude as Alaska. In summer, we got all the hot water we needed free. And if there was any sun on a winter’s day (it might have been minus six degrees centigrade at night) the system would still provide about ten degrees of heat. And systems have almost certainly improved since then…

Next stage was to tackle power itself. We found an experienced contractor (who became a friend). He came out with a meter which measured the amount of sunlight we’d be able to harness, and the calculations began.

Most people installing photovoltaic (PV) generation in their homes in Hawaii opt for “Net Metering”. Instead of a large bank of batteries, you use the utility company’s power grid as your battery. When you’re producing more than you’re using, your meter winds backwards. When you’re not – for example, at night – you take power from the grid.
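
Here’s a toy model of how the meter behaves over a sunny day – the hourly figures are invented purely for illustration, not measurements from our system:

```python
# Net metering sketch: the grid is the "battery". Hour-by-hour kW figures
# below are made up for illustration.
production  = [0]*6 + [1, 2, 3, 3, 3, 3, 3, 3, 2, 1] + [0]*8   # solar output, kW
consumption = [0.3] * 24                                        # steady house load, kW

net_from_grid = sum(c - p for p, c in zip(production, consumption))
print(net_from_grid)   # negative: the meter wound backwards - a net export day
```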

Kauai Island Utility Co-operative (KIUC) is our local utility, and they’re a bit behind the curve. Uncertain as to the overall effect of net metering on their system, they had placed a cap on the percentage of customers who could be on Net Metering.

In any event, we didn’t want to install a PV system and still be dependent on the grid. On an island like this, where the weather can change quickly, we used to have frequent but short power outages. I think KIUC switches off power when there’s a thunderstorm passing over the island. It sounds inconvenient, but it does mean you seldom see a blown transformer – which often happens in thunderstorms on the mainland, and takes much longer to fix.

So we opted for 24 PV panels, and a bank of 16 batteries – enough battery storage to last us for a couple of days without top-up. An Outback controller, and we were set.

We’ve been running the system for several months now. In all that time, we’ve used grid power for less than two hours.

Here’s a typical day. We wake up after running on batteries all night, and the indicator panel in the house tells us we have around 70-75% of power remaining. By around 8.30am (at the moment – perhaps 10am in winter), the panels kick in, and the meter tells us we’re charging again.

By around noon, the system is fully charged once more, and we continue to generate excess power for the remainder of the day. Great time to do a couple of loads of washing in the machine, or run any power tools. That’s also when the house begins to get hot and we need to run a fan. Doesn’t matter what we use – it’s all free power at this point!

The panels generate 3kW. That may look like serious over-engineering – but there’s method in our madness. Once practical plug-in electric cars hit the market, we intend to buy one – then we’ll never pay for fuel again. We don’t drive a lot of miles anyway. The longest trip is probably from here on the North Shore to Lihue, the largest town on the island – less than 50 miles, round-trip. The system has the spare capacity to keep an electric car fully charged.

We’re not power hogs. We run a couple of computers almost all day, a refrigerator, and not much else. We cook with propane. We watch DVDs at night (can’t stand network or cable TV).

It wasn’t cheap to install the system, and it will take a long time before we ever recover the installation cost. But money’s not everything. It feels great that we’ve reduced our carbon footprint this much, and that we’ve done something to reduce our dependency on fossil fuels – and will do more.

I received an electricity bill for one of the worst months of the winter. It rained a lot last winter, and we’d had to switch on grid power for about an hour and a half. The bill said it all:

Power usage for the same period last year: 710kWh
Power usage this year: 16kWh

Wow! The State of Hawaii should be encouraging this even more! (There are already tax incentives). KIUC should be encouraging it too – because it would save us all a fortune (They recently built a new fossil-fuel generating plant to add capacity).

And the best part of all: there may have been power outages this winter – but we’ve never noticed. The batteries just keep on going…

Text Rendering on MacBook Pro running Internet Explorer 8 on Vista

This is a screen grab of text rendered on my new MacBook Pro laptop, running Internet Explorer 8 on Microsoft Windows Vista (in a BootCamp partition).

My FB friend Alessandro Segalini asked me to post a screen shot. I originally saved this as a JPEG, then when I saw it in FB – where I posted it as a photo – I realized that the JPEG rendering was doing something funky to the color.

If you know how ClearType works – by manipulating the Red, Green and Blue sub-pixel elements – you’ll know that any color manipulation can do weird things.

The text on FB still looks good, but I didn’t want the color weirdness (it’s only slight, but I can see it’s there) to give the wrong impression. So I’ve imported a 24-bit BMP this time. However, a thought just occurs to me: Blogger may scale this BMP to fit into its narrow column. If it does, then you won’t see the real thing here either. But if you click on it, you should see it at full size in a new window.
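
If you want to see the damage for yourself, a lossy round-trip makes it measurable. A sketch using the PIL/Pillow imaging library – the filenames are made up:

```python
# Measure the color shift a lossy JPEG round-trip inflicts on a
# ClearType screen grab. Requires PIL/Pillow; filenames are illustrative.
from PIL import Image, ImageChops

original = Image.open("cleartype_grab.bmp").convert("RGB")
original.save("roundtrip.jpg", quality=90)
roundtrip = Image.open("roundtrip.jpg").convert("RGB")

diff = ImageChops.difference(original, roundtrip)
print(diff.getextrema())   # nonzero per-channel maxima = shifted colors,
                           # which unbalance the R,G,B sub-pixel rendering
```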

I’ve posted the BMP on my website, so you can download it and look at it in Windows Paint or some other pixel-level graphics tool in which you can be sure nothing funky’s being done to the original.

I’m going to write a lot more about running Windows on this great Macintosh. The 17″ screen and 133ppi display are terrific.

Dell shipped 133ppi displays about 12 years ago, of course, and shipped displays as high as 147ppi in laptops ten years ago (Inspiron 7500 or 7800 – I forget which). But the brightness and clarity of the Mac display are something else. I opted for the high-gloss, glass screen rather than the anti-glare one. So my screen looks like a giant iPhone turned on its side…

However, neither Apple nor Microsoft – nor many websites, including Blogger – uses the screen real estate that’s available (1920 x 1200 pixels).

133ppi is definitely a threshold. Once your display is beyond that, then “traditional” websites – those designed under the outdated assumption that all displays are ~96ppi – start to break all over the place.

Windows Vista allows me to set the Font DPI to the actual screen DPI. But when I do it, websites really start going crazy. It’s not the fault of Windows, which is doing the right thing. It’s the fault of the applications and the website creators.

I’ve had to fall back to 120ppi as my Font DPI because of this problem.

This underlying assumption of ~96ppi is at the heart of many issues. The assumption has been invalid for many years now, and the problem will get worse as more and more high-res devices and displays appear.

It really is time to make computing resolution-independent.

Leaving Microsoft: The Journey Continues…

Well, it’s official now. I’m leaving Microsoft.

It’s time to begin the next stage of a mission that began for me in the early 1980s – when I realized that computers were about to change the publishing industry radically and forever. I helped to drive the desktop publishing revolution that changed high-quality printing and made it accessible to anyone with a personal computer.

I guess the second stage began when I first saw hypertext in 1985, while writing the user manual for Guide, the first Macintosh hypertext authoring program (those were the days when software needed a thick printed manual).

Hypertext was supposed to replace paper. But everyone promoting it had forgotten the one basic flaw in the reasoning – reading from the screen was so bad that everyone would still print information in order to read it.

The journey brought me from Scotland to the Pacific Northwest, to work at Microsoft, which I believed was the one company in the world best-placed to lead the transition from reading on paper to reading on a screen.

It may sound trite, but so many millions of people worldwide use Microsoft Windows and Microsoft Office that it’s a truism: Change Windows and Office, and you change the world.

There are probably a billion people worldwide with ClearType on their PCs, along with other improvements like the onscreen reading view in Word.

I’ve had the opportunity to work with many clever folks at Microsoft. Together, we have driven a lot of change. I want to publicly thank the ClearType team at Microsoft, most of whom I’ve worked with since I joined the company over 14 years ago. They drank the Kool-Aid before anyone, when they worked for me in Microsoft’s Typography group. Back in 1995, we produced a plan together which focused us on reading from the screen. The Verdana and Georgia fonts were the first fruit of that work. And they’re still believers.

I’d like to thank Charles Torre of Microsoft’s Channel9, which has been a huge success; it has always been a delight to work with him. If you want to see Bill Hill videos, Channel9 is the place to go – together, they’ve had hundreds of thousands of views.

It has been a privilege to know and work with all at Channel9, and I hope we’ll stay in contact.

The job of making the screen as comfortable to read as paper is not yet completed. I’ve come to believe that it is the development of Web standards, and standards-based rendering, which will take us the rest of the way.

There’s huge potential. Two trillion pages are still printed in the US alone, every year, and that’s an enormous waste of energy and resources.

For some time, I have been preparing for leaving Microsoft by setting up my network and communications. You’ll find me on FaceBook and LinkedIn, as well as on my website and this blog.

As you all know, I’m a Man With A Mission. I have no intention of becoming a beach bum. I always said that I would probably go back to writing, which I did professionally for almost as long as I worked in the software industry. I’ll continue with my blog and website.
I’ve become convinced over the past couple of years that no one company or browser will make the transition to reading on screen happen. I still believe in eBooks. Amazon has definitely seized the lead there, by providing the two things which were both essential to success – a device and a bookstore.

I have some other ideas I’m not yet ready to talk about. And of course I’m available as a consultant.

I’m facing the future with what is probably the right mix of fear and excitement. It has always felt like this is destined to happen, it’s a lot bigger than me, and I’m not in control.

In the past, what seemed like the worst thing that could happen has often turned out to be the best thing that could have happened.

Reading on Screen IS The Future of Reading. People thought I was mad when I tried to tell them that back in the 1980s and 1990s. Now we all spend hours every day, reading from a screen. We’ve come a long way.

But we still have promises to keep, and miles to go before we sleep.