Monthly Archives: August 2008

Authoring Web-Standards Pages: Like Setting Type in the Days of Gutenberg…

I’ve just spent the last few days working to build a set of Web pages authored to W3C standards (HTML 4.01 and CSS 2.1). They’re now up on my website:

http://www.billhillsite.com/

I’ve done some HTML authoring before, of course. But I always stayed away from anything but the simplest hand-coding. When I wanted to do something more complex (typographically speaking), I’d fire up a publishing tool, which would let me do things in a visual way and then translate what I’d done into code.

This time – at least at first – I didn’t really care about the code itself. I was creating the pages in order to show off how Embedded OpenType (EOT) font embedding could be used to really enhance readability on the Web, by allowing Web designers to use high-quality commercial fonts on their pages, even though readers would not have those fonts installed.
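For anyone curious about the mechanics, the CSS involved is roughly this (the font name and file below are placeholders for illustration, not the actual fonts on my pages):

  @font-face {
    font-family: "My Licensed Font";           /* the name the rules below refer to */
    src: url("fonts/mylicensedfont.eot");      /* EOT file built from the licensed font */
  }
  body {
    font-family: "My Licensed Font", Georgia, serif;  /* fallbacks if embedding fails */
  }

If the reader’s browser supports EOT, the text appears in the embedded font; if not, the fallback fonts take over.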

It worked like a charm – at least as far as my goals of showing off the technology were concerned. I even apologized for the code when I posted the pages, and explained that they were merely a vehicle to show EOT actually working.

I wanted to post the pages quickly, to coincide with an announcement by font vendor Ascender Corporation that it was supporting Embedded OpenType (EOT) and launching a new website:

http://fontembedding.com/

I’d have liked to post a Web-standards set of pages too, but there just wasn’t time. My colleague Chris Wilson tried to warn me. He even coded up the first page. But we couldn’t get them done in time for the announcement, so I decided to go with what I had.

Reactions were wildly different. One or two people were really helpful – one offered to give up some of his time to convert the pages to cleaner Standards-compatible code. But some were downright rude. Don’t they get it? If you disagree with someone, but you’re civil about it, there’s a good chance they’ll listen to your argument.

Anyway, I’d already decided that I needed a set of identical Web-standards pages up there. While the offer to make them for me was very generous, it wouldn’t do much to improve my personal level of understanding. So I thought it was time to get my hands dirty. I bought a pile of HTML and CSS books, then set about deconstructing Chris’s page until I’d figured out how it worked. (If you want to learn how to paint, the best way is to study the work of a master…)

You can’t use any of today’s visual tools to do this work – they all end up inserting chunks of their own code, which is inefficient and impossible to understand.

Chris recommended I use Notepad, the basic text editor that comes with Windows. I also tried FrontPage, which worked fine as long as you stuck to the HTML view. I found life a lot easier with line numbers, so Chris told me to try Notepad++, and that’s what I ended up using.

It’s a good tool, and it’s available free at:

http://notepad-plus.sourceforge.net/uk/site.htm

Chris’s other suggestion was Visual Studio, but that’s way too heavy-duty for me – I am only an egg (from Robert A. Heinlein’s Stranger in a Strange Land – see BOOKS I RECOMMEND).

My first page was a struggle, as you’d expect. But it was a great feeling when I finally got it to do what I wanted. It was even better when the W3C’s HTML validator service passed it (after I’d tidied up – it caught a couple of orphaned opening and closing tags the first time around).

After that, my coding for each of the subsequent pages got steadily quicker.

I think Web standards are great. I’ve been a big supporter inside Microsoft. They still need to evolve to support really great typography, because the Web is without doubt the publishing medium that will replace paper, and it should be capable of doing anything paper can do (especially as screens get better).

My experience with authoring standards-compatible pages by hand – and the fact that there isn’t any other acceptable way – really got me thinking.

We’re back in the days of Gutenberg. In those days, all text had to be set by hand. It was a job for the real expert – not something any regular person could ever do. One of the world’s great type designers, Hermann Zapf, told a conference a few years ago that no one has yet succeeded, despite all the technology developed since, in setting type as well as Gutenberg did.

The key to Gutenberg’s setting was that he used many more ligatures than are in common use today. A ligature (from the Latin ligare, to bind) is a special character which binds two or more letters into a single composite glyph so that they fit together better. The most common examples are ff, fi, ffl and ffi.

Most typefaces today have only a handful of ligatures. But Gutenberg cast many additional ones which ensured a tight and aesthetically pleasing fit of groups of two, three, four or more letters.

We see character sets in typefaces being extended all the time to support more languages. OpenType has given us the ability to create many more ligatures. But we’re not yet using that capability to its full potential.

Hand-coding standards-compatible Web pages is just like setting type by hand. If I want a true apostrophe, for instance, I have to type the entity &rsquo; (or &#8217;), or paste the character in. True double and single quotes, em-dashes – they all have to be hand-coded.
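To make that concrete, here’s what a single typographically-correct sentence looks like in the source (a made-up example, not a line from my actual pages):

  <p>It&rsquo;s &ldquo;hand-set&rdquo; type &mdash; every quote,
  apostrophe and dash spelled out by hand as an entity.</p>

which the browser renders as: It’s “hand-set” type — every quote, apostrophe and dash spelled out by hand as an entity.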

How sick is that?

Hand-setting of type was a huge bottleneck in the printing process. It took about 400 years (until Victorian times) before machines like the Linotype and Monotype typecasters were able to take over most of the load.

Of course, access to machines like that was pretty much limited to professional publishing concerns.

The typewriter made it possible for almost anyone to put print on paper – pretty ugly, limited, monospaced print, it’s true, but it was a great leap forward.

Since the advent of the personal computer in the 1970s we’ve seen steady progress in the quality of printed material ordinary non-experts can produce. Dot-matrix and daisy-wheel printers have been replaced by laser and inkjet printers. Regular word-processing software like Microsoft Word can turn out very professional documents. The desktop publishing revolution of the 1980s brought software which can do just about anything you want. And you don’t have to be an expert to use any of these – templates and default settings will give you professional results.

And here we are with the Web. It’s the most important publishing medium in human history. Because for the first time it democratizes both the production AND distribution of high-quality content. And you can read it on a screen, keep it up to date, re-purpose it, etc.

But it’s clear to me that we’re still missing a huge piece of technology: an authoring tool which can be used by anyone without expert knowledge (i.e. the ability to hand-code), and which produces standards-compatible, professional-quality Web pages.

That’s the Desktop Publishing revolution of the 21st Century, and it’s still waiting to happen…

Introducing the Colophon to the Web: a New Business Model for Fonts?

For hundreds of years, printers and publishers have included a Colophon in books. That’s a section – usually a page, often at the back of the book – which describes which fonts were used to set it, and perhaps gives some history of who created those particular fonts and when.

Here’s an example of the kind of information it might contain. I’ve used the font Baskerville Old Face, which is part of the Linotype Library, and information I reproduced from the Linotype website.

“This book is set in Baskerville Regular Old Face, which was designed by John Baskerville in 1750, and belongs to the Baskerville Font Family, comprising 6 fonts in Windows TrueType format, which is part of the Linotype Originals.

John Baskerville (1706-1775) was an accomplished writing master and printer from Birmingham, England. He was the designer of several types, punchcut by John Handy, which are the basis for the fonts that bear the name Baskerville today. The excellent quality of his printing influenced such famous printers as Didot in France and Bodoni in Italy.

Though he was known internationally as an innovator of technique and style, his high standards for paper and ink quality made it difficult for him to compete with local commercial printers. However, his fellow Englishmen imitated his types, and in 1768, Isaac Moore punchcut a version of Baskerville’s letterforms for the Fry Foundry. Baskerville produced a masterpiece folio Bible for Cambridge University, and today, his types are considered to be fine representations of eighteenth century rationalism and neoclassicism. Legible and eminently dignified, Baskerville makes an excellent text typeface; and its sharp, high-contrast forms make it suitable for elegant advertising pieces as well.”

[Baskerville Regular Old Face – graphic from the Linotype website]

If you love type as much as I do, you just lap up this sort of information. But even if you don’t, isn’t it cool to find out that the typeface you’ve been enjoying has been around for more than 250 years?

So I always look at the Colophon in a book…

Anyway, as readers of this blog will know, I’ve been trying to drive the establishment of Embedded OpenType as a Web standard which would allow the legal use of high-quality commercial fonts on the Web.

People are stuck today with the limited set of fonts they can be sure will be installed on the computers of everyone viewing their sites. But if we can embed any fonts we’ve bought, the Web will explode with great design and high-quality typography. There’s absolutely no reason your website can’t look as great as the beautifully-set magazine you buy every month.

And that started me thinking: Why not introduce the venerable concept of the Colophon to the Web? Could it be used to drive a new business model for fonts which would benefit the font industry, web developers and designers – and the people who visit their sites?

I’ve run the idea past a few font folks I know, and they’re quite excited about it.

Here’s how it might work:

You’re a web designer or developer, and you want to use a font, or a number of different fonts, on your site. You’ve bought legal copies of all the fonts you plan to use, and they all come with Web embedding rights.

You create a Colophon page on your site which tells visitors about the fonts you used. It doesn’t just give their history and other interesting information; it also includes a link to the font vendor(s).

If your readers like the fonts you used, they simply click on the link, and it takes them to a site where they can buy the fonts, download them, and start using them right away in their own documents and websites.
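Here’s a bare-bones sketch of what such a page might contain (the class name, foundry and URL are invented for illustration):

  <div class="colophon">
    <h2>Colophon</h2>
    <p>This site is set in Baskerville Old Face, designed by
    John Baskerville in 1750.</p>
    <p><a href="http://www.example-foundry.com/baskerville">Buy
    Baskerville Old Face from the foundry</a></p>
  </div>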

Now, you can see how this could be taken further, with a business model like, say, Google’s AdSense. If the font vendor wished, they could pay you a small commission every time someone bought a font using the link from your site. The fonts you use might actually end up paying for themselves, or even making you money!

For the industry, Web Font Embedding would change from being perceived by some as “a potential threat to their valuable Intellectual Property” into a marketing, advertising and sales vehicle with the potential to really increase their font revenue by exposing their products to more customers than ever before.

An alternative thought I had was that the End User License Agreement for a font with Web Embedding permissions turned on might require the website designer or developer to put a Colophon on their site in return for the embedding permissions, or perhaps for a price discount.

I’m brainstorming here, just putting out a couple of ideas. Perhaps developers would find a compulsory Colophon too onerous a requirement. I don’t know. It would be up to the industry and the Web to work out the details and create a workable business model that benefits everyone.

I’d be interested in readers’ thoughts. There are probably even better ideas out there I haven’t yet considered. But I’m quite taken with the concept of fonts as a viral marketing channel. The more people who use fonts, the more people who buy fonts, the more we’ll be sure of a healthy font industry for the next 550 years. And we do need a healthy font industry; there’s lots of work still to be done, as publishing moves from paper to the screen.

Lack of Decent Tools Holding Back “The Web for the Rest of Us”…

I’m going to take a little trip down PC Memory Lane… Bear with me; this isn’t idle reminiscing. There is a point, and I’ll eventually get to it.

“Those who fail to learn from history are condemned to repeat it” is a very valid saying, and I feel that on the Web we’ve forgotten some things we should have learned a long time ago.

I remember the very first time I typed text into a computer, and saw it in halfway decent type. It would be 1984, and I was using MacWrite on an Apple Macintosh.

Up until then, I’d used a few MS-DOS and CP/M PCs with applications like WordStar – for years, the leading word processor – or WordPerfect.

MacWrite was like a breath of fresh air. Pretty basic, but in terms of graphical display of text, miles ahead of anything else at the time. Then along came Microsoft Word, and at last we had a word processor that was both powerful AND created text that was at least part-way readable on screen. What you saw on the screen was a reasonable approximation of what you’d get when you printed out your document.

By 1986, I was working for Aldus Corporation in Edinburgh. PageMaker – by then the world’s leading desktop publishing package – depended entirely on the bet the company had made on the future of the Graphical User Interface (GUI).

Microsoft made the same bet with Windows – and Word for Windows. WordPerfect – by then the world’s leading word processor – was still running on MS-DOS. The company failed to make that leap of faith, and was toppled from its leadership as a result. A GUI version eventually appeared, but it was too little too late; by then Word had achieved an installed base that was pretty much unassailable.

Aldus made the same GUI bet on Windows. There were PC GUI competitors, like Ventura Publisher, which ran on the GEM windowing environment.

I have to confess, the very first time I saw PageMaker running on a PC, it looked like a bad joke. The PC in question was a British Apricot (it was all about fruit in those days…), fitted with a Hercules Graphics Card. The Hercules card ran a 720×348 monochrome mode, with pixels that were rectangles much deeper than they were wide. The result was that a page of a publication created in PageMaker on it was also wildly stretched. The new desktop publishing and word processing software all depended on WYSIWYG (What You See Is What You Get), which allowed you to accurately place text and graphics. On a Hercules card it was a disaster.

However, along came VGA graphics cards for the PC, and at last we had square pixels, color (although we Brits spelled it colour, of course), and acceptable WYSIWYG. The Macintosh, of course, had been designed with square pixels from the start. There were also 72 of them to the inch, which mapped very nicely to printers’ measures, in which there are 72 points to the inch.

That resolution worked in a world where everything was printed out at 300 dpi or better. But it was way too coarse for reading on a screen.

The PC industry benefitted hugely from Moore’s Law – the observation that the number of transistors on a chip, and with it processing power, would keep on doubling every two years while the cost halved. Unfortunately, Moore’s Law didn’t apply to PC graphics. Dell shipped a 147ppi laptop about ten years ago, so it took ten years for pixel density merely to double. The price didn’t come down, and software manufacturers (including Microsoft) failed to see the opportunity at the time and didn’t bring out resolution-independent operating systems and applications. The result was a suboptimal experience on high-resolution displays.

That’s changing now, but it’s taken far too long. For years, a large proportion of PC users with high-resolution desktop and laptop machines have been running them at lower resolutions in order to make text and icons display at comfortable sizes.

For years, the computer industry kept on pretending that all PCs had a resolution of around 96ppi.

Unfortunately, the Web’s largely been doing the same. That’s why we see websites which have fixed pixel sizes, and contain measurements in pixels. Try looking at one of those sites on a 204ppi display, though, and the problems scream out.
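You can see the difference in a single stylesheet rule (an illustrative example, not from any particular site):

  #column { width: 600px; }   /* 6.25 inches wide at 96ppi, but under 3 inches at 204ppi */
  #column { width: 40em; }    /* scales with the reader's chosen text size on any display */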

The pixel is a relative dimension – it depends on the resolution of the PC on which it’s being displayed. Human vision, though, depends on absolute measurements – because the fovea, which we use for all high-resolution work like reading, is 0.2mm in diameter. That’s true for the entire human race – there aren’t some folks with 0.1mm foveas and some with 0.3mm; foveal vision is all around 600ppi resolution. (See earlier postings in this blog if you’d like more detail).

So that’s the first mistake we’re repeating.

The second is this: When I typed text into Word, or laid out a page in PageMaker, I didn’t have to do any coding at all. All the coding took place behind the scenes – I just provided the content.

Most people have content they want to communicate. In the past it was documents, now it’s more and more about Web content.

Sure, there are visual authoring tools out there. But most of them still encourage fixed pixel dimensions. And they also put large chunks of their own proprietary code into the content.

There were a few factors that made this happen. First, the formatting capabilities of HTML were pretty rudimentary in the beginning. I remember the first time I saw the Mosaic browser. You could have any typeface you wanted, as long as it was Times…

The only way you could get text in an unusual typeface was to create a graphic containing it. It was fixed-pixel, of course, and wouldn’t scale. Designers also tried to drag the 35,000-year-old First Law of Design into the Web: First, fix the size of the space you’re going to fill.

HTML’s become a lot more powerful, especially with the addition of Cascading Style Sheets. And there’s an opportunity to get back to the ideal behind it – that content and formatting should be kept separate – and still have high-quality formatting, layout and text composition.
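Here’s a trivial illustration of that separation (the file and class names are my own invention):

  <!-- the content: plain, meaningful markup, with no formatting in it at all -->
  <link rel="stylesheet" type="text/css" href="site.css">
  <p class="lead">The Web is the most important publishing medium in human history.</p>

  /* the formatting: lives entirely in site.css */
  p.lead { font-family: Georgia, serif; font-size: 1.2em; line-height: 1.5; }

Change site.css and every page on the site changes; the content itself never needs to be touched.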

Web-standard markup is clearly the way to move forward. But something really important is missing. To achieve Web-standards markup today, you have to code your content by hand in a plain-text editor like Notepad.

That’s just ridiculous! It’s like going back to the days of WordStar, when if you wanted to format your text you had to use all kinds of esoteric keystrokes to put special codes into your content. In fact, it’s worse – WordStar at least had keyboard shortcuts. To create valid HTML, you have to know all the codes and enter them by hand.

Here’s my wish list:
  1. A WYSIWYG HTML editor which displays text in your browser of choice and shows you the changes interactively, BUT
  2. Writes only Web-standards HTML and CSS which validates using the W3C’s HTML validator service
  3. Inserts no code which will not validate
  4. Supports adaptive layout – so you can interactively see what your site will look like in a window the size of a cellphone, or a cinema-sized display, and anything in between (perhaps, to cover extreme cases, letting you create a special CSS stylesheet for very small or very large windows – see the sketch after this list)
  5. Detects the display on which your page is running so it can make intelligent layout decisions
  6. Supports the highest-quality typography possible
  7. Supports Font Embedding, so users who don’t have the fonts you want to use installed will still see them (but doesn’t enable or encourage easy font piracy).
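For item 4 on the list, CSS media queries already contain the germ of the idea – browser support is still patchy, so treat this as a sketch rather than a recipe:

  #content { width: 40em; margin: 0 auto; }    /* the default layout */

  @media screen and (max-width: 480px) {       /* cellphone-sized windows */
    #content { width: auto; margin: 0 0.5em; }
  }

  @media screen and (min-width: 1600px) {      /* cinema-sized displays */
    #content { width: 50em; font-size: 1.25em; }
  }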

I’ll probably add to this list as I learn more.

Today, to get the ability to “just write content – not code”, I’m forced to use a blog hosting environment like this one. But the layout and readability are pretty poor. I have only the shallowest control over layout and typography. It’s the best I can find today – but it’s nowhere near good enough.

Blogger archives my content. But I want to be able to create my own content archive, and then build different ways people can view it. A blog forces you to read articles by “Date Posted”. Sure, you can explore the archive, but not systematically.

I’d like to be able to create “An Issue” – like the issue of a magazine – containing a “set” of articles. I’d like people to be able to read from “Oldest to Newest”, or with articles grouped “by Subject”. All of these things are easily possible at runtime, using meta tags.
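For example, the head of each article could carry something like this (the names and values are my own invention, not any standard):

  <meta name="subject"  content="Typography">
  <meta name="issue"    content="August 2008">
  <meta name="sequence" content="12">

A little scripting could then assemble “issues” or subject groupings from those values, instead of forcing everything into date order.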

I made those funky Web pages as an experiment – to try to get a handle on what’s wrong and how it might be fixed.

I’m on a journey of discovery right now. I’d be glad to get ideas from other people on these topics – and others, because I’m sure there are plenty of issues I haven’t yet encountered.

If someone’s building a tool like this, I’d love to try it. If not, I’ll keep collecting requirements in the hope that one day I can interest the right people.