We’ve been improving readability in print for about 550 years, ever since Johannes Gutenberg invented moveable metal type, an ink that would stick to it, and a press which enabled multiple impressions of a page to be taken.
Gutenberg didn’t invent printing. The Diamond Sutra, the earliest known dated printed book, was produced in the 9th Century. The Chinese had moveable clay type by the mid-11th Century, and the Koreans were printing with moveable metal type by the 14th. People had been printing from woodblocks for centuries before that.
Gutenberg was the Henry Ford of his day. He mechanized the process, and its use to create multiple copies of books and other documents exploded first across Europe and then across the rest of the world.
Gutenberg’s technology broke the stranglehold on information previously held by the Roman Catholic Church.
Type and page designers have been working ever since on improving readability to the point where today – at least in print – text is at the pinnacle of more than five centuries of evolution.
It’s much, much harder to set type well than to set it badly. Unless you’ve been involved, you have no idea of the complexity. Letter shapes and the way they work with each other. Letter spacing. Word spacing. Alignment. Line length. Margins. Page size.
All these factors need to be computed to a precision of about 1/600th of an inch. Why? Because that’s the resolution of human vision.
The apparatus we use to read is a high-precision scanning machine made up of our eyes, the muscles that move them, and our brains.
This machine operates at 600 dots per inch (dpi), takes 20-25 milliseconds to find each scanning target, and scans five targets per second.
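A quick back-of-the-envelope sketch of what those figures imply for reading throughput. The fixation rate comes from the text above; the words-per-fixation figure is my own illustrative assumption (skilled readers take in roughly one word per fixation, give or take):

```python
# Rough reading throughput implied by the scanning-machine figures.
# fixations_per_second comes from the text; words_per_fixation is an
# assumed average for illustration, not a measured value.

fixations_per_second = 5       # from the text: five scanning targets per second
words_per_fixation = 1.0       # assumption; varies with reader skill and text difficulty

words_per_minute = fixations_per_second * words_per_fixation * 60
print(words_per_minute)        # 300.0
```

That lands in the ballpark of typical adult reading speeds, which suggests the five-targets-per-second figure is consistent with everyday experience.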
Over 550 years, text has evolved to be optimized for this scanning machine, so it can operate in the most efficient way possible.
It wasn’t particularly deliberate or scientific in the beginning. No-one knew how human vision worked. Instead, what happened was a process of Darwinian evolution.
In other words, people just tried stuff (mutations, if you like). What worked, survived, and what didn’t work died along the way. There have been many experiments in type and type technology.
We’ve only been doing onscreen reading for about 23 years. My nomination for the starting point is the arrival of a graphical user interface in mass-market computing, with the launch of the Apple Macintosh in 1984.
It’s easy to put type on a screen. It’s much harder to do it well, properly optimized for human reading. In fact, I’d argue that we have not yet succeeded in creating a truly optimized reading experience on a screen.
As I said earlier, it’s very much harder to set type well than to set it badly. The only reason so much care is taken in print is that badly-set text is not acceptable to human readers.
We haven’t really begun to take the same care on the screen.
Oh, it can be done. All the technologies exist today to do it. But no-one’s ever put them together properly. That’s what I’m trying to do at Microsoft, the only reason I joined the company and the reason I stay.
Reading’s a core human task. People use it every day. None of our economic prosperity, science or technology would have happened without it. It spawned the Renaissance and the Age of Enlightenment – so even the political shape of our world today would not exist without the schools of thought which sprang up with the availability of books and education.
Reading’s the first thing anyone has to learn if they want to learn anything, to improve their lot in life.
And it’s undergoing the biggest change in 550 years, as more and more of it takes place as we look at a screen.
It might seem to you that reading onscreen is “OK”. But it isn’t, to anyone who really knows about text and type. There’s so much wrong that needs to be fixed.
We need to take the lessons we’ve learned in the past 550 years and apply them to text onscreen. In future blog postings, I’ll talk about that, and what I think needs to be done.
550 years getting it right for print. It won’t take us anything like that long to create the best-possible reading experience on screen, one that’s every bit as good as – in fact, better than – paper.