“Those who fail to learn from history are condemned to repeat it” is a very valid saying, and I feel that on the Web we’ve forgotten some things we should have learned a long time ago.
I remember the very first time I typed text into a computer, and saw it in halfway decent type. It would be 1984, and I was using MacWrite on an Apple Macintosh.
Up until then, I’d used a few MS-DOS and CP/M PCs with applications like WordStar – for years, the leading word processor – or WordPerfect.
MacWrite was like a breath of fresh air. Pretty basic, but in terms of graphical display of text, miles ahead of anything else at the time. Then along came Microsoft Word, and at last we had a word processor that was both powerful AND created text that was at least part-way readable on screen. What you saw on the screen was a reasonable approximation of what you’d get when you printed out your document.
By 1986, I was working for Aldus Corporation in Edinburgh. PageMaker – by then the world’s leading desktop publishing package – depended entirely on the bet the company had made on the future of the Graphical User Interface (GUI).
Microsoft made the same bet with Windows – and Word for Windows. WordPerfect – by then the world’s leading word processor – was still running on MS-DOS. The company failed to make that leap of faith, and was toppled from its leadership as a result. A GUI version eventually appeared, but it was too little too late; by then Word had achieved an installed base that was pretty much unassailable.
Aldus made the same GUI bet on Windows. There were PC GUI competitors, like Ventura Publisher, which ran on the GEM windowing environment.
I have to confess, the very first time I saw PageMaker running on a PC, it looked like a bad joke. The PC in question was a British Apricot (it was all about fruit in those days…), fitted with a Hercules graphics card. The Hercules card ran a 720×348 monochrome mode, whose pixels were rectangles much deeper than they were wide. The result was that a page created in PageMaker on it was also wildly stretched. The new desktop publishing and word processing software all depended on WYSIWYG (What You See Is What You Get), which let you place text and graphics accurately. On a Hercules card it was a disaster.
However, along came VGA graphics cards for the PC, and at last we had square pixels, color (although we Brits spelled it colour, of course), and acceptable WYSIWYG. The Macintosh, of course, had been designed with square pixels from the start. There were also 72 of them to the inch, which mapped very nicely to printers’ measures, in which there are 72 points to the inch.
That resolution worked in a world where everything was printed out at 300 dpi or better. But it was way too coarse for reading on a screen.
The PC industry benefitted hugely from Moore’s Law, which observed that the number of transistors on a chip would double roughly every two years – in practice, processing power kept doubling while cost per transistor fell. Unfortunately, no such law applied to PC graphics. Dell shipped a 147ppi laptop about ten years ago, so it took ten years for pixel density merely to double – and the price didn’t come down. Software manufacturers (including Microsoft) failed to see the opportunity at the time: they didn’t bring out resolution-independent operating systems and applications, and the result was a suboptimal experience on high-resolution displays.
That’s changing now, but it’s taken far too long. For years, a large proportion of PC users with high-resolution desktop and laptop machines have been running them at lower resolutions in order to make text and icons display at comfortable sizes.
For years, the computer industry kept on pretending that all PCs had a resolution of around 96ppi.
Unfortunately, the Web’s largely been doing the same. That’s why we see websites which have fixed pixel sizes, and contain measurements in pixels. Try looking at one of those sites on a 204ppi display, though, and the problems scream out.
The pixel is a relative dimension – it depends on the resolution of the PC on which it’s being displayed. Human vision, though, depends on absolute measurements – because the fovea, which we use for all high-resolution work like reading, is 0.2mm in diameter. That’s true for the entire human race – there aren’t some folks with 0.1mm foveas and some with 0.3mm; foveal vision is all around 600ppi resolution. (See earlier postings in this blog if you’d like more detail).
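To make the point concrete, here’s a minimal CSS sketch (the class names are hypothetical) contrasting fixed pixel sizing with relative sizing that tracks the reader’s own settings rather than the display’s resolution:

```css
/* Fixed pixel sizing: on a 204ppi display this column and its text
   render at roughly half the physical size the designer intended */
.article-fixed {
  width: 760px;
  font-size: 13px;
}

/* Relative sizing: scales with the reader's preferred type size,
   whatever the resolution of the display */
.article-relative {
  max-width: 40em;   /* line length tied to the type size */
  font-size: 100%;   /* inherit the user's default size */
}
```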
So that’s the first mistake we’re repeating.
The second is this: When I typed text into Word, or laid out a page in PageMaker, I didn’t have to do any coding at all. All the coding took place behind the scenes – I just provided the content.
Most people have content they want to communicate. In the past it was documents, now it’s more and more about Web content.
Sure, there are visual authoring tools out there. But most of them still encourage fixed pixel dimensions. And they also put large chunks of their own proprietary code into the content.
There were a few factors that made this happen. First, the formatting capabilities of HTML were pretty rudimentary in the beginning. I remember the first time I saw the Mosaic browser. You could have any typeface you wanted, as long as it was Times…
The only way you could get text in an unusual typeface was to create a graphic containing it. It was fixed-pixel, of course, and wouldn’t scale. Designers also tried to drag the 35,000-year-old First Law of Design into the Web: First, fix the size of the space you’re going to fill.
HTML’s become a lot more powerful, especially with the addition of Cascading Style Sheets. And there’s an opportunity to get back to the ideal behind it – that content and formatting should be kept separate – and still have high-quality formatting, layout and text composition.
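As a sketch of that separation (the stylesheet name is hypothetical), the content file can stay plain, semantic markup while every formatting decision lives in the CSS:

```html
<!DOCTYPE html>
<html>
<head>
  <title>On Screen Typography</title>
  <!-- all formatting lives in the stylesheet, none in the content -->
  <link rel="stylesheet" href="site.css">
</head>
<body>
  <article>
    <h1>On Screen Typography</h1>
    <p>The pixel is a relative dimension; human vision is absolute.</p>
  </article>
</body>
</html>
```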
Web-standards markup is clearly the way to move forward. But something really important is missing: to achieve Web-standards markup today, you have to code your content by hand in a plain-text editor like Notepad.
That’s just ridiculous! It’s like going back to the days of WordStar, when if you wanted to format your text, you had to use all kinds of esoteric keystrokes to put special codes into your content. In fact, it’s worse – WordStar at least had keyboard shortcuts. To create valid HTML, you have to know all the codes and enter them by hand.
Here’s my wish list:
A WYSIWYG HTML editor which displays text in your browser of choice and shows you the changes interactively, BUT
Writes only Web-standards HTML and CSS which validates using the W3C’s HTML validator service
Inserts no code which will not validate
Supports adaptive layout – so you can interactively see what your site will look like in a window the size of a cellphone, or a cinema-sized display – and anything in between (perhaps to cover extreme cases, allowing you to create a special CSS stylesheet for very small or very large windows)
Detects the display on which your page is running so it can make intelligent layout decisions
Supports the highest-quality typography possible
Supports Font Embedding, so users who don’t have the fonts you want to use installed will still see them (but doesn’t enable or encourage easy font piracy).
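Two items on that wish list map directly onto existing CSS features. Here’s a sketch, with a hypothetical font name and file URL, of what adaptive layout and font embedding look like in a stylesheet:

```css
/* Adaptive layout: special rules for very small and very large windows */
@media (max-width: 320px) {
  body { font-size: 90%; }            /* cellphone-sized window */
}
@media (min-width: 1600px) {
  body { max-width: 45em; margin: 0 auto; }  /* cinema-sized display */
}

/* Font embedding: readers without the face installed still see it */
@font-face {
  font-family: "HouseFace";           /* hypothetical font name */
  src: url("houseface.woff");         /* hypothetical font file */
}
h1 { font-family: "HouseFace", Georgia, serif; }
```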
I’ll probably add to this list as I learn more.
Today, to get the ability to “just write content – not code”, I’m forced to use a blog hosting environment like this one. But the layout and readability are pretty poor. I have only the shallowest control over layout and typography. It’s the best I can find today – but it’s nowhere near good enough.
Blogger archives my content. But I want to be able to create my own content archive, and then build different ways people can view it. A blog forces you to read articles by “Date Posted”. Sure, you can explore the archive, but not systematically.
I’d like to be able to create “An Issue” – like the issue of a magazine – containing a “set” of articles. I’d like people to be able to read from “Oldest to Newest”, or with articles grouped “by Subject”. All of these things are easily possible at runtime, using meta tags.
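As a sketch of the kind of per-article metadata that would make those runtime groupings possible (the names and values here are hypothetical, not an existing standard):

```html
<head>
  <title>Why 96ppi?</title>
  <!-- hypothetical per-article metadata, readable at runtime -->
  <meta name="issue"   content="Issue 3: Reading on Screen">
  <meta name="subject" content="Typography">
  <meta name="posted"  content="2007-03-01">
</head>
```

A tool could then collect these tags across an archive and build a table of contents by issue, by subject, or oldest-to-newest, without touching the articles themselves.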
I made those funky Web pages as an experiment – to try to get a handle on what’s wrong and how it might be fixed.
I’m on a journey of discovery right now. I’d be glad to get ideas from other people on these topics – and others, because I’m sure there are plenty of issues I haven’t yet encountered.
If someone’s building a tool like this, I’d love to try it. If not, I’ll keep collecting requirements in the hope that one day I can interest the right people.