Why standards fail

July 21st, 2007 by Tom Elrod

To understand why standards fail, we first have to look at what standards ultimately achieve, which is choice of implementation/vendor for users. An example of where choice due to standards comes into play is choosing a web server versus choosing an AOP framework. When considering which web server to choose, you will evaluate based on the merits of the implementation, such as reliability, cost, ease of use/configuration, etc. When looking at AOP, the same evaluation criteria come into play, but you must also consider the risk of being locked into a particular flavor of AOP provided by a vendor (i.e. AspectJ, JBoss AOP, etc.). If you choose a particular vendor and later decide to move to another, the cost of migration will be much higher: since the work is not standards based, you will likely have to re-write a good portion of what was already done for the initial vendor to make it work with the new one. There are a number of other reasons the use of standards-based technology is good, such as it being easier to find talent already familiar with the technology, a larger pool of support resources, etc.

Hopefully everyone agrees that standards-based technology is generally good for end users, so what’s the problem? The two main problems with specifications are 1) the process takes too long and 2) the end result does not meet the needs of the users. Both of these problems generally stem from the fact that standards bodies are almost always driven by companies, and these companies favor their business interests over the interests of their customers (as expected).

Why specs take forever

Any time you bring people together to make joint decisions, it is going to take a while to reach consensus. The more people you add, the longer it takes. However, delays in reaching consensus on specification items are often not based on differences in technological philosophy, but on how close each vendor is to already having that item implemented within their product. If a vendor is a long way off and knows that its competitors are close, it will delay the item (a technology filibuster). This lets the vendor catch up with its development so it is not playing catch-up in the market after the final spec is released.

I personally think an example of where this occurred is the EJB 3.0 spec. The spec kicked off in earnest in early 2004 and had two draft releases within a year. At that point, the spec was in pretty good shape and seemed not far from completion. However, it was another 14 months before it reached final approval. If you are not familiar with the EJB 3.0 spec, it basically calls for an implementing vendor to have some sort of ORM technology and an architecture for implementing annotations (which was AOP based for most vendors). There was a particularly large vendor participating in the spec that did not have these technologies available internally when the drafts were released. I’ll let you connect the dots…
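To make concrete what the spec was asking of vendors, here is a minimal sketch of the EJB 3.0 programming model: ORM metadata lives on the class as annotations, and the container reads it reflectively at deployment time. This is an illustrative stand-in, not the real API; the `Entity` annotation below is a hypothetical simplification of `javax.persistence.Entity`.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-in for the JPA @Entity annotation.
// Must be retained at runtime so a container can discover it via reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Entity {
    String table() default "";
}

// Application code: a plain class carries its own ORM mapping metadata,
// instead of that metadata living in a separate XML descriptor.
@Entity(table = "ORDERS")
class Order {
    long id;
}

public class AnnotationDemo {
    public static void main(String[] args) {
        // What a container does at deployment time: scan classes and
        // pull mapping metadata off the annotations.
        Entity e = Order.class.getAnnotation(Entity.class);
        System.out.println("Mapped to table: " + e.table());
    }
}
```

Supporting this model is exactly why a vendor needed both an ORM engine (to act on the metadata) and an annotation-processing architecture (to find and interpret it).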

Why specs are weak

Often specifications are produced that cover such a small set of functionality that they are relatively useless. Specs that fall into this category usually started with much loftier goals, but the feature set got whittled away until nothing with any bite was left. One of the main reasons this happens is that vendors cannot agree on which features they believe they can or should implement. In an effort to move the spec forward, the group will concede to removing the item from the spec (or, when we’re lucky, marking it optional). Again, the vendor wanting the feature removed either does not want to be left out in the cold to play catch-up with everyone else, or wants to leave the feature out of the spec (when it already has it implemented) so it can be sold as a “value add” at a premium. A good example of the “value add” angle is that nothing is mentioned in the J2EE spec about clustering other than the ‘distributable’ tag within the web application deployment descriptor. Most modern application servers support clustering for web applications, EJBs, JMS, etc., so the only reason I can see for this not being standardized is so vendors can charge more for their “clustered” version and lock users in.
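For reference, the entire standardized clustering surface in a web.xml deployment descriptor is the empty ‘distributable’ element shown below (a minimal Servlet 2.4-era fragment). It merely declares the application safe to run in a distributed container; session replication strategy, failover behavior, and everything else are vendor-specific.

```xml
<web-app xmlns="http://java.sun.com/xml/ns/j2ee" version="2.4">
    <!-- The only standardized clustering hint in J2EE: this app's
         sessions may be replicated across a cluster. How replication
         actually happens is left entirely to the vendor. -->
    <distributable/>
</web-app>
```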

Impact of delayed and weak specifications

In the big picture, users just need what they need to get their job done. If they can’t find that within a standards-based technology, they have to find it elsewhere. Spring is a good example of this. J2EE 1.4 was so bloated and complicated that enterprise application developers started looking for alternatives that allowed more flexibility and ease of development, testing, and deployment (i.e. you don’t have to buy 5 books to understand how to use the technology). I feel that Java EE 5 could have been a viable alternative to Spring and seen wider adoption if it had been released in 2005. Since JEE5 took so long, Spring gained wide adoption and is now the de facto standard, IMO.

So when I see vendors complain about non-standard open source projects becoming the de facto standard, I can’t help but think it is their own fault. After reading this blog post from Pierre Fricke of Red Hat, I wonder whether the commercial vendor representative who spoke out about this even understood why the trend was occurring.

I personally believe in standards and have been an expert member on several JSRs. However, if the members of standards bodies do not start recognizing that the developer community is less tolerant of lengthy release delays and won’t accept specifications that don’t adequately meet its needs, they should remember that the developer community has alternatives and will use them. Commercial vendors that don’t want to ultimately lose out entirely may want to consider changing their approach to working on specifications.
