Future Directions

In this section, I’ll look at some of the larger issues facing web services, including the complexity of development, the need for a business model, and security. Finally, I’ll present a mental model for tracking the evolution of web services over the near future.

Lowering the Bar

In this text, we’ve focused on Java as our development language of choice. Over time, a variety of scripting languages have appeared, promising to open programming and development to a wider audience. Many vendors of development tools talk of a three-tiered model, with “ordinary users,” “scripters,” and “software developers” occupying the three ranks. Sometimes, this is broken down as “HTML,” “scripting,” “procedural,” and “object-oriented” development.

In some ways, it would be more accurate to break down the complexity of web service development along the lines of client development and server development. This text has focused almost exclusively on client development—not web service server development. Applications may function as a server relative to the user’s web browser, but we have not attempted to provide our own web services (save for an RSS feed).

As the component model for web application development matures, it’s possible that the more complex web service infrastructure such as SOAP and WSDL will be sufficiently encapsulated to make the development of web service applications accessible via visual builder tools. For example, several commercial tools now available allow a user to specify a WSDL file and automatically generate web service components.

Understanding the Business Model

Almost all of the web services offered by various companies are provided for free in conjunction with an offline service of some kind. For example, FedEx provides the web services for free, but they charge to actually ship something. Similarly, Amazon offers free web services (indeed, they pay for you to publish material using their associates program), but they charge to actually ship products.

An open question is the viability of charging for web service access in and of itself. For example, eBay keeps track of the overall usage of the web services system and charges fees based on the access. In practice, this means that eBay is charging auction fees and also fees to view and post auctions. It’s unlikely that eBay would have been successful if it had applied a per-page-fee formula to ordinary web browsing.

This is very important and very significant for the future of the Internet. For example, an inexpensive pay-per-access web service model could pave the way for a micropayment system for access to content and software. Alternatively, it could also provide for richer integration between a client and a server for specific applications.

It introduces the potential for a richer economic model between a publisher of a service and desktop software. For example, a vendor that sells music online might offer a set of interfaces to access the system. The makers of desktop software could then create new clients to access the system with different capabilities, obtaining a commission for music sold through the system. In this fashion, a richer economic model is created, with more customer choice.

To date, this has been limited by the tendency of a corporation to want to own the entire chain of experience between themselves and the end user. This is the difference between the vendor of an application or a point product and a platform.


Security

Security in web services is still something of a mixed bag. Throughout this text, we’ve seen a variety of approaches, and this variety is likely to continue for some time. Some services support HTTPS- and SSL-based connection encryption. A minor gesture is made toward security with the use of an MD5 hash for passing along passwords. A variety of tokens are required to access different systems; depending on the service, these can include a developer token, an authentication token, an application token, a user account, and a password.

It’s easy to criticize the efforts with regard to security that have been made, but one of the biggest advantages of web services is the very reliance on the underlying standards. For higher security environments, it’s easy to envision using existing technologies such as SSH or a VPN. Proxy servers and services can provide for robust logging and debugging capability. The same XML that SOAP relies on makes it easy for a proxy system to inspect the messages being sent.


Tracking the Evolution

The wide availability of SOAP clients and servers and the broad industry support from both small and large vendors are very positive signs. XML-RPC also enjoys wide support, and the simplicity and stability of the specification mean that many of the libraries used are also stable. RSS also enjoys widespread support, but the sheer variety of feeds and the various “interpretations” of the specification make it much harder to work with than one might hope.

Interestingly, all of these rely on XML. There is a rich tradition of text-based protocols underlying the Internet, and XML allows a developer to provide a text-based representation without having to write a parser (one of the nastier, more error-prone pieces of software one can write). Even REST, one of the more aggressive alternative approaches, relies on XML as a primary data type.

The bright side is that virtually all vendors have agreed on XML as a universal glue. With the exception of CDDB, every service in this book uses XML in some fashion. It’s likely that all these systems are going to move along a more or less predictable axis toward SOAP and WSDL as the developers of both the client and server side of the software get tired of reinventing the wheel. For example, why bother constantly recreating binding layers between your preferred development language and your server when simply standardizing on SOAP and WSDL gives you multilanguage bindings “for free”? Indeed, many systems provide WSDL bindings automatically when you build a SOAP service.

The next stage, raw (or very loosely formatted) text, typically piped over TCP/IP, is much easier to work with than a binary format. Prior to the popularity of XML, this was the de facto standard for virtually all Internet protocols. FTP, HTTP, NNTP, Gopher, SMTP, and MIME are all based on specific text markers for delimiting data. For example, if you browse through the IETF site and look at the original specification for HTTP (at http://www.ietf.org/rfc/rfc1945.txt?number=1945), you’ll notice that tremendous attention is paid to low-level detail (the same sort of detail and Backus-Naur Form [BNF] grammar rules that one might expect from a compiler specification). The Internet Engineering Task Force (IETF) lists a tremendous number of low-level protocols that form the technological backbone of the Internet, but as developers began working with HTML and XML, the conversation moved beyond low-level BNF notation into a more accessible realm.
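To make those “text markers” concrete, here is a minimal sketch (in Python, for brevity) of building an HTTP request and parsing a canned response using nothing but string handling; the header values are illustrative, not from any real service:

```python
# Illustrative sketch: HTTP delimits data with plain text markers (CRLF pairs),
# so a request can be assembled and a response parsed with simple string handling.

def build_request(host, path):
    """Assemble a minimal HTTP/1.0 GET request as raw text."""
    return (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "\r\n"  # a blank line marks the end of the headers
    )

def parse_response(raw):
    """Split a raw HTTP response into (status code, headers dict, body)."""
    head, _, body = raw.partition("\r\n\r\n")  # blank line separates head from body
    lines = head.split("\r\n")
    status_code = int(lines[0].split()[1])     # e.g. "HTTP/1.0 200 OK" -> 200
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name] = value
    return status_code, headers, body

canned = "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello"
code, headers, body = parse_response(canned)
print(code, headers["Content-Type"], body)
```

The entire protocol round-trip is legible as text, which is exactly why these protocols were so easy to debug with nothing but a telnet session.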

The popularity of the World Wide Web, specifically HTTP, meant that virtually every major programming language soon included the ability to open an HTTP connection. Instead of engaging in the IETF-level semantics of a protocol, a developer could merely open a connection and get back data. XML, a stripped-down version of SGML, looked a lot like HTML but was easy to validate. A number of XML parsers were developed for a variety of languages, and suddenly we could all move data back and forth very easily.
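As a sketch of how little code that shift requires, here is a hypothetical two-disc catalog parsed with Python’s bundled XML parser (the document and element names are invented for illustration):

```python
# A sketch of why XML lowered the bar: the parser ships with the language,
# so moving structured data takes a few library calls, not a hand-written grammar.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <disc id="1"><artist>Miles Davis</artist><title>Kind of Blue</title></disc>
  <disc id="2"><artist>John Coltrane</artist><title>Blue Train</title></disc>
</catalog>"""

root = ET.fromstring(doc)
titles = [disc.findtext("title") for disc in root.findall("disc")]
print(titles)  # ['Kind of Blue', 'Blue Train']
```

No grammar, no tokenizer, no error-recovery logic: the nasty parts are someone else’s problem.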

XML-RPC codified this notion into something that looks a lot more like a remote method call. Notice that there is no true object-oriented aspect to XML-RPC— no notion of inheritance or polymorphism, for example. This ensured that XML-RPC was accessible to nonobject-oriented systems, and that support would be available from a variety of systems.
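A minimal sketch of that remote-method-call flavor, using Python’s standard library (the book’s own examples are in Java; the `add` procedure here is invented for illustration):

```python
# XML-RPC as a plain remote procedure call: a sketch using Python's
# standard library. Note there are no classes, inheritance, or
# polymorphism involved -- just a named procedure and its arguments.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Expose a single procedure; port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sees an ordinary-looking method call; the library marshals
# the arguments into an XML payload and sends it over HTTP.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)
print(result)  # 5
server.shutdown()
```

The proxy call looks like a local method invocation, but everything on the wire is the same human-readable XML-over-HTTP discussed above.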

SOAP, despite the original expansion “Simple Object Access Protocol,” has over time been acknowledged to be neither truly simple nor object-oriented, and the name officially no longer has any meaning (it’s an acronym without an expansion). While the implementation of supporting libraries is more difficult, as we’ve seen, it’s not tremendously more difficult for a developer to either publish or use a SOAP service.
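Underneath any SOAP toolkit, the wire format is simply more XML. Here is a sketch of a hand-built SOAP 1.1 envelope for a hypothetical `getPrice` call (the method name and its argument are invented for illustration; real toolkits generate this for you):

```python
# The shape of a SOAP 1.1 message: an Envelope in the SOAP namespace
# wrapping a Body that carries the call. Hand-built here only to show
# the structure a toolkit normally hides.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
call = ET.SubElement(body, "getPrice")           # hypothetical remote method
ET.SubElement(call, "sku").text = "B000002I4S"   # hypothetical argument

raw = ET.tostring(envelope, encoding="unicode")
print(raw)
```

Because the payload is ordinary XML, a proxy sitting between client and server can log and inspect every call, which is exactly the debugging advantage described earlier.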

The final addition of WSDL makes an application much easier to work with—as time has passed, virtually every modern development environment now has support for SOAP (at least as a client) and WSDL. Even such limited environments as Palm OS (http://www.palmos.com/dev/tech/webservices/) and PocketPC (http://www.pocketsoap.com/ or http://msdn.microsoft.com/library/en-us/wcesoap/html/ceconSOAP.asp) now support both SOAP and WSDL.

There will likely be a period of time in which an enterprising developer can offer bridge services between less mature services and the richer world of SOAP and WSDL (for example, it’s easy to envision offering a SOAP view of the CDDB system). However, as of this writing, many of the organizations that currently offer non-SOAP systems (such as eBay) have already publicly indicated their desire to move to this environment.

The future of web services beyond SOAP and WSDL is far less clear. Some discuss the notion of choreography, which gives definition to the back-and-forth communication required to perform a transaction. Others point to a need for more coherence in terms of the actual APIs provided, or the details of what is meant by a transaction (for example, industry agreement over what is meant by a “purchase order” would make it easier to process an order across a supply chain). Savvy businesses and developers are too cautious to embrace a complex standard prematurely, and the lack of a clear, authoritative, credible organization has impeded these efforts. On occasion, a very large organization is able to mandate a “standard” to partners, but typically these efforts are locked in as a proprietary system, not put forth as an open one.

It’s not an accident that most of the popular systems (for example, XML-RPC, SOAP, and WSDL) have had complete implementations donated to a trusted, well known organization such as the Apache Group. Even if the software available isn’t the best (although sometimes it is), a developer can work with the software, bundle it into their application, and otherwise take advantage of the “plumbing” provided without feeling encumbered by any potential proprietary system.

Probably the only real prediction that can be made with any degree of certainty is the eventual coalescence of web services around SOAP and WSDL. Many of the services that don’t implement SOAP already implement systems that closely parallel it. While SOAP and WSDL may never be as prevalent as HTTP and HTML, they will almost certainly come to play as important a role in your development toolkit as TCP/IP or SQL.

Real World Web Services
By Will Iverson
Publisher : O’Reilly
Pub Date : October 2004
ISBN : 0-596-00642-X

Developing The Right Look

Your Web site is a virtual version of your store or office. The way your workplace looks has a direct impact on how customers or clients feel about you. The same goes for your Web site.

Here are some things to consider when developing the right look for your Web site.
First Impressions Count

Here’s a scenario for you.

Say you need some legal advice and a friend has recommended a lawyer. When you arrive for your appointment, you discover that the lawyer’s office is in a storefront on the bad side of town. In the reception area, the carpet is worn and dirty and paint is peeling off the walls. The receptionist’s desk is missing a leg, so a whole corner of it is being propped up with fat law books that look like they were used for a cat’s scratching post after they’d been in a flood. The receptionist, who is chewing a huge wad of gum, tells you to wait on one of the folding chairs set up against a wall. All the magazines on the TV table beside it have their covers torn off. What are you going to think about that lawyer?

Your Home page is like your store’s entrance or your office’s reception area. It’s what people first see when they visit. Don’t you think it’s important to give visitors the right first impression?
The Limitations of HTML

Most Web pages are created with HTML, a markup language that, when interpreted by a Web browser, displays the page as the Web author intended. Or close to it. Or maybe nothing like it at all.

You see, HTML has limitations in the way it displays information. It’s important to know and understand these limitations when designing your Web pages.

HTML was never intended to handle page layout. As a result, it’s very difficult, if not downright impossible, to create a Web page that exactly replicates a complex print document such as a brochure.

A Web page can be any length. It can also be any width. Word wrap is normally determined by the width of the Web browser window. That means that changing the width of the browser window can change the appearance of a Web page in that window.

Fonts appear larger in Windows browsers than in Mac OS browsers. As a result, a page created by a Web author on a Mac OS system seems to have large fonts when viewed on Windows. Likewise, a page created by a Web author on Windows can have very tiny fonts when viewed on Mac OS.

The fonts that can appear on a Web page are determined by the fonts installed on the site visitor’s computer. So if you set up a page using fonts that the visitor doesn’t have, text will appear in the default font. And the visitor can override special fonts anyway, to display all text in the font he prefers.

Different Web browsers support different HTML tags. For example, Explorer supports the MARQUEE tag; Netscape does not. Similarly, Netscape supports the BLINK tag; Explorer does not. (Frankly, I find both of these tags rather annoying—and that’s excuse enough for not using either one.)

Older Web browsers do not support the most recent HTML tags. That means a Web page using HTML version 4.0 (the current version as I write this) won’t look the same on an older browser (say, a Netscape or Explorer 2.0 browser) as it does on a current browser.

A smart Web author can overcome some of the limitations of HTML by intelligent coding. This, however, can cause other problems. For example, to fix the page width, all page information can be enclosed in a fixed-width borderless table. But this won’t work for someone viewing the page with a very old browser. And if the width is fixed wider than the visitor’s screen width, he’ll have to scroll from side to side to see everything. (No one likes doing that.)
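For what it’s worth, the fixed-width-table trick described above looks something like this in era-appropriate HTML (the 600-pixel width is an arbitrary choice for illustration):

```html
<!-- A borderless, fixed-width table constrains the page to 600 pixels
     no matter how wide the visitor's browser window is. -->
<body>
  <table width="600" border="0" cellpadding="0" cellspacing="0">
    <tr>
      <td>
        <!-- All page content goes inside the single cell. -->
        <h1>Welcome to Our Store</h1>
        <p>Everything inside the table wraps at 600 pixels.</p>
      </td>
    </tr>
  </table>
</body>
```

As the text notes, this buys predictable line lengths at the cost of side-to-side scrolling on narrow screens and broken layout in very old browsers.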

I guess the point I’m trying to make here is that you can’t approach Web site or page design thinking that you’ll have complete control over appearance. You won’t. Instead of forcing your Web authoring software to imitate the tools available in page layout software, use the tools available within HTML to build pages that attractively and effectively communicate your message.

That’s all you can do—and it’s enough.


Build a Company Image

Maybe your storefront or office isn’t much different from that lawyer’s reception area. Does that mean your Web site should be equally unimpressive? Of course not.

You can use your Web site as an image-building tool. Through the use of graphics, color, and writing style, you can make your Web site project the image you want to the people who visit.

Of course, a homegrown, amateurishly designed Web site can make your business look downright awful to visitors.

Using Graphic & Multimedia Elements Wisely

In my humble opinion, more Web sites have been ruined by the improper use of graphics than any other folly. In some instances, it’s the result of having the site built by an amateur who couldn’t design his way out of a paper bag. In other instances, it’s the result of a talented designer being in love with graphics he can access at T1 speeds. It irritates the heck out of me and is the main reason I don’t spend more time surfing the ‘Net.

The Golden Rule

First, the golden rule of using images, graphics, and multimedia elements: the element must add something to the Web page without costing more than it’s worth. Cost doesn’t have anything to do with money. It has to do with how much time it takes for the element to appear in the visitor’s Web browser window, how much extra effort the visitor must spend to view the element (by downloading and installing plug-ins, etc.), and how annoyed the visitor might be that he wasted his time and effort to finally see the element.

Every element you include on a Web page must be worth more than it costs.

When evaluating worth and cost, you must be objective. Yes, it would be really cool if your Home page used a background image that was a photograph of your storefront. But how useful is it? Would visitors be able to read the text sitting on top of it? How much time would it take for the image to appear? If you want to show how beautiful your storefront is, wouldn’t it be better to use a smaller image, possibly on the page where you provide your address and driving directions?

Big is Bad

Ah, how I wish I could pound this concept into the head of Web designers all over the world. So many of them still don’t get it.

Big images usually come with big file sizes. While it’s possible to minimize file size by optimizing the image for Web use, there’s only so much you can do. At the same time, a relatively small image that isn’t optimized can also have an unnecessarily large file size.

At this point, you may be wondering what the big deal is. After all, your ISP may allow you 100 MBytes of hard disk space for your Web site. What’s wrong with a few 100 Kbyte files?

There’s nothing wrong with it as far as your ISP and Web server are concerned. It’s the site visitor—remember? The person you’re trying to provide information to?—who won’t like it. You see, in order for an image to appear on a Web page, it must be downloaded from the Web server to the visitor’s Web browser. The speed at which the image downloads is determined primarily by the speed of the visitor’s connection to the Internet.

Studies have shown that the average Web surfer will wait less than 10 seconds for something interesting to appear on a Web page. If you fill your pages with fat images that take a long time to download, it isn’t likely that the visitor will stick around to see your page at all.

Multimedia Madness

Multimedia elements include animations, movies, and sounds. Like static images and graphic elements, they can make your site look more visually appealing and interesting. They can also provide information about your products, services, or company. But they can be very costly in terms of file size, download time, and convenience.

When multimedia effects are overdone or done incorrectly, they can make your site sluggish and unprofessional. If you surf the Web, I’m sure you’ve visited sites with pages that automatically load (or attempt to load) fancy animations or movies. Did you want to see that animation or movie? Maybe not. Yet it was forced on you when you went to the page. The Web designer assumed that you’d take the time to download and view it. (But you showed him. One look in the status bar to see how big the file was and you clicked the Back button and got out of there fast.)

Multimedia elements (beyond simple quick-loading animations) should never be forced on a site’s visitors.

Tip: If you do include large multimedia elements on your site, make them accessible by links. Clearly indicate the size of the file that will be downloaded when the link is clicked, as well as whether any special software is required to view it.

My Take on Sounds

Sound is another multimedia element that’s often used incorrectly.

Here’s an example. A center for bulimia and anorexia hired a Web designer to build a site with information about its main facility. The Web designer included music on the site’s Home page, so when the Home page appeared, music would automatically play. Sounds neat, huh?

Not to everyone. Consider the worried mother who is at work, using her office computer on the sly to explore treatment options for her sick daughter. What do you think will happen when the sound of music starts coming out of her cubicle? Not only will her co-workers find out about a personal family problem, but she could be in danger of losing her job. Clearly, unexpected sounds should not be included on a Home page—or any other page, for that matter.

What’s the proper use of sound? I can think of a few things. Obviously, if sound is part of your business—for example, if you’re a musician or run a record company—it could be included on the site. Sound bites can also be used to provide information—comments made by the company president at a recent press conference, for example. I’m sure you can think of other appropriate uses. But if you have to stretch your imagination to think of them for your business, they might not be appropriate after all.

Tip: If you do decide to use sounds on your Web site, make them accessible by links. Clearly indicate that a sound will result when the link is clicked.

The Importance of Consistency

Does your company have a letterhead? Envelopes? Business cards? Do they all pretty much have the same design, complete with your company’s logo?

They should. The consistent design of these basic printed materials helps reinforce your company’s identity. The inclusion of a company logo adds branding, further reinforcing that identity.

Your Web site should be the same. Not only should its overall design be consistent with your existing printed materials—including typefaces (when possible), colors, graphics, and logos—but each page should have the same basic design.

Consistency in appearance from one page to the next can help tie your site together. Once you develop an overall design for your site’s Home page, you should use the same general design on the remaining pages. Then there’s no question what site the visitor is viewing when he clicks a link. Either he’s on another page of your site or he’s on a different site altogether.

Putting Your Small Business on the Web
The Peachpit Guide to Webtop Publishing
By Maria Langer
Publisher : Peachpit Press
ISBN : 0-201-71713-1

A Little Background on the Web

A desire to represent this connective aspect of information was a driving force in 1989 for Tim Berners-Lee, a physicist in his mid-thirties working at CERN when he conceived the World Wide Web. “Inventing the Web involved my growing realization that there was power in arranging ideas in an unconstrained, Web-like way . . . A computer typically keeps information in rigid hierarchies and matrices, whereas the human mind has the special ability to link random bits of data.” The idea Berners-Lee pursued was to program computers to create a space in which they could link otherwise unconnected information. And the users of these linked documents from connected computers could become much more knowledgeable and more creative.

This approach to computers is a philosophical change from the way we generally think about computing. The principles underlying the Web represent a fundamental change from the way people previously viewed and used information: a shift in the way we think about and connect with one another, a change in the way marketers think about marketing, and a change in how businesses, institutions, and governments think about communications.

The foundation of the Web stretches all the way back to 1945, when Vannevar Bush, an engineer from MIT and head of the Wartime Office of Scientific Research and Development under Franklin D. Roosevelt, wrote “As We May Think,” an article in the Atlantic Monthly focused on global information sharing. He envisioned a personal, searchable machine for storing and cross-referencing microfilm documents, with information “trails” that linked to related text and illustrations. His Memex machine was never built, but the concept of organizing information similar to the way the brain worked was not forgotten.

In 1965, Ted Nelson, a visionary who developed a “non-sequential writing system” while at Harvard, presented a paper at the Association for Computing Machinery conference in which he talked of “literary machines” that allowed people to publish documents in what he called “hypertext.” He described Xanadu, a project that would contain all of the world’s information published in hypertext, allowing the reader of one document to link out to other related documents, following the reader’s train of thought. Nelson, like Bush, was too far ahead of his time. Xanadu was never realized.

Throughout the sixties and seventies, numerous other people pondered the complexities of our ability to share growing amounts of information. Several ideas in publishing and computing eventually jelled to provide the underlying structure for the concept of the Web. Historically, writers, editors, or graphic designers in publishing would “mark up” a manuscript for typesetting. This essentially told the typographer what type size, line length, and spacing to use when setting the type. There were also marks that described page elements or the format to be used. As we progressed to word processing with electronic text, people at IBM developed this practice into an electronic tagging scheme known as Generalized Markup Language (GML), which gave meaning to page elements. By separating the presentation of a document from its content, GML provided a way for many people to edit, share, and reuse the text. More importantly, it was developed so multiple electronic devices could share it. The concept quickly spread within the publishing and computing industries, and in the mid-eighties it became the Standard Generalized Markup Language (SGML).

The notions of separating content from its presentation and of using names for markup elements to identify text objects descriptively—a formal grammar to describe structural relationships between objects—were the basis for the future development of the Web. Understanding the interplay of content with its context at a structural level is fundamental to grasping the “mechanics” of the Web. And as we shall see later, it is also essential to understanding the global creative possibilities of the Web.

Most of us do not think of such things when we sit at our computers. We see computers as highly organized, rigid, dumb machines that may help us accomplish certain tasks if we are patient enough to learn the procedures. Although computers began as mechanical counting devices, a means of calculating or “computing” mathematically, since the information explosion of the seventies and the desktop revolution of the eighties (stimulated in part by the use of SGML), we have viewed them primarily as information storage devices.

In the past, most of us thought all of the information on our computers was proprietary. Even if connected to a local area network (LAN) within the office, or a wide area network (WAN) within the organization, “information silos” was—and to a large extent still is—the prevailing structural view. We “drill down” through a data hierarchy to find the document we want. Though an individual or a company may own some of this data, much of it is thought of as proprietary simply because of this storage structure.

Like the hierarchical management of corporations, the structure itself has a powerful and sometimes debilitating influence on the ability of a company to perform. Connections, when they can only be made in a linear fashion—up and down the command ladder—may be rational and strengthen control, but do nothing for spontaneity or creativity. Changing the structure of information expands the creative potential. As any designer knows, most new ideas arrive serendipitously. And anyone who has spent a little time on the Web—having gotten beyond the preconception that computers cannot contribute to imagination— knows it is a tremendous creative resource.

With the introduction of three simple tools, Berners-Lee changed the way we think about information. He did not ask us to give up ownership. His view was that the Web was like a market economy where anybody could exchange information with anybody, from anywhere, and in nearly any form. All that was needed for the exchange were some basic standards everyone could agree on.

The three tools he provided, which have now become universal standards, are:

Universal Resource Locator (URL). A method for locating documents by address, similar to the way the postal service delivers your mail to a street number, city, state, and zip code.

Hypertext Transfer Protocol (HTTP). A standard for how computers speak to one another.

Hypertext Markup Language (HTML). A simple coding methodology, a simplified SGML, that allows people to specify what a line of text may do in a document (i.e., appear as simple text, as a large headline, or as a link to other text or another document).

In August of 1991, Berners-Lee placed his tools on the Internet, making them available to all who were interested. This was the beginning of “information space.” The world has talked about, played with, worked with, complained about, and extolled the virtues of this space for over twelve years now, but the concept and its implications are still hard for many to grasp. The Web is not a “place.” It is not a “thing.” There is no central computer, single network, or single organization controlling it.

Although the World Wide Web Consortium (W3C, www.w3.org) — a nonprofit organization founded by Berners-Lee in 1994—does provide “guidelines” for Web development, the Web is essentially an enormous, unbounded, chaotic world of information. By connecting documents from all over the world, information has not only grown; it has changed. The Web is not just providing more information; it is not only a giant library, or a new publishing medium, or a marketing method. It is what Michael Dertouzos, director of the MIT Laboratory for Computer Science, calls “a gigantic Information Marketplace, where individuals and organizations buy, sell, and freely exchange information and information services among one another.”

Chaos theorists would call this change in information structure a “phase transition,” something like what happens to water when it changes into ice or steam. The water has not only gotten hotter or colder; it has changed fundamentally, at the level of molecular connectivity. Another “phase transition” in communications was the invention of the telephone. Although not intended as such, the telephone was one of the first great ideas of connectivity. Alexander Graham Bell originally thought of it as a broadcast medium, but to his surprise, the telephone provided real-time, two-way communication, a revolution in information exchange that we often overlook because we use it so frequently.

With the exception of the telephone and its forefather, the telegraph, just about all public communication channels prior to the Web were one-way streets. Radio and television broadcast information over airwaves. Newspapers and magazines broadcast via print. Most business and organizational communications, whether by public channels or private (newsletters, memos, videos, or closed-circuit TV), are essentially one-way transmissions.

The mind-set of advertisers who supported this kind of information distribution was “tell and sell.” Look at any print or broadcast advertising over the last 100 years and you will see one-way talk about the features, advantages, and benefits of products, services and organizations: Here is what we have, what we know, what we believe, and what we think you should also know, believe and buy. Ultimately the goal was to direct the receiver of this information to visit the “store” and make a purchase.

Much of what exists on the Web today, even after twelve years, is still following the old model, when what the Web is about is a different, totally new approach. The Berners-Lee vision was of an information space “to which everyone has immediate and intuitive access, and not just to browse, but to create . . . a universal medium for sharing information . . . ” Sharing is the key word here. After centuries of clinging to what we have and what we know, feeling pride in the ownership of things and knowledge, protecting our knowledge in silos, it is difficult to grasp the extent of change a phase transition requires. It is nearly impossible to reverse our thinking, to open our doors and let others in.

To realize the promise of the Web we need to review everything we are currently doing in business and in our personal lives. We need to let go of many of our favorite habits, old models, best practices, and much of our current language to “make room” for new things, new knowledge, new languages, and new possibilities. As we will see in the following chapters, the real business of Web design is about broadening knowledge, enlarging our capacity for imagination, expanding business markets, creating new opportunities, saving time, reducing costs, and improving the quality of life by connecting people to people. Although most businesses on the Web have not gotten it right yet, the human desire to connect is clearly there.

John Waters
Published by Allworth Press
An imprint of Allworth Communications, Inc.
10 East 23rd Street, New York, NY 10010
ISBN: 1-58115-316-3

