The REST architectural style includes several points of interest. One of them is a "uniform" or "fixed" API. There is a subtle issue with that: I don't really think that this concept is what REST is all about.
Save Our Architecture... Arguments
I guess there are many recurring arguments of REST vs SOAP for SOA. Before I get started, I'd like to make sure my viewpoint on these arguments is clear. REST vs SOAP is mostly about ignorance-based arguments. REST is an architectural style for implementing remote system interfaces. SOAP is about interoperability in those remote system interfaces. There have been some rather hilarious blogs of late that describe some of the insanity that revolves around the "SOAP BOX" that WS-* proponents tote around. The RESTarians are having a lot of fun with it all.
Without getting into a big SOAP vs REST debate, I'd like to focus on some of the discussion that I've had with REST proponents about the "uniform API" of HTTP being an attribute of REST style architecture.
In many RPC systems, there is a uniform API utilized to invoke operations between systems. The "INVOKE" operation has a single parameter, the argument list, and a single return value, the operation's result.
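That uniform invoke shape can be sketched in a few lines of Java. The `Invoker` interface and the operation names here are illustrative only, not taken from any particular RPC framework:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// The single INVOKE operation common to many RPC systems: one entry
// point taking an operation name plus an argument list, and returning
// a single result.
interface Invoker {
    Object invoke(String operation, List<Object> args);
}

class LocalInvoker implements Invoker {
    private final Map<String, Function<List<Object>, Object>> ops = new HashMap<>();

    void register(String name, Function<List<Object>, Object> op) {
        ops.put(name, op);
    }

    @Override
    public Object invoke(String operation, List<Object> args) {
        Function<List<Object>, Object> op = ops.get(operation);
        if (op == null) {
            throw new IllegalArgumentException("unknown operation: " + operation);
        }
        return op.apply(args);
    }
}

public class UniformApiDemo {
    public static void main(String[] args) {
        LocalInvoker invoker = new LocalInvoker();
        // Every call goes through the same uniform invoke() signature.
        invoker.register("add", a -> (Integer) a.get(0) + (Integer) a.get(1));
        System.out.println(invoker.invoke("add", List.of(2, 3))); // prints 5
    }
}
```

The point is that the shape of the call never varies; only the operation name and argument list do.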
One can argue that RMI/Jini, CORBA, sunrpc, COM/DCOM and others are really RESTful systems as well. However, there are more words and acronyms in the typical REST argument. So, it gets a bit heated to just use the "uniform API" mantra as a point of argument.
An interesting attribute of HTTP is that it doesn't "cause" an application to be RESTful. It merely (or usually) allows RESTful systems to be created by providing a structure with less freedom for mistakes in design. That is, if you understand what the HTTP protocol specification documents for the GET, POST, and PUT operations; if you grasp and adhere to the "state transfer" (not transport) mechanics; and if you understand how to create URIs that refer to resources, not "operations" (because GET, POST, and PUT are the operations), then most of the time you can create a RESTful system.
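As a rough sketch of that URI/operation split, here is a toy dispatcher where the verbs are the HTTP methods and the URI names only the resource. The `/users/{id}` path and the bare status strings are hypothetical, made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy dispatcher keyed on (HTTP method, resource URI). The verb lives
// in the method; the path names only the resource.
class ResourceRouter {
    private final Map<String, String> users = new HashMap<>();

    String handle(String method, String path, String body) {
        if (!path.startsWith("/users/")) {
            return "404";
        }
        String id = path.substring("/users/".length());
        switch (method) {
            case "GET": // transfer the resource's current state to the caller
                String state = users.get(id);
                return state == null ? "404" : "200 " + state;
            case "PUT": // replace the resource's state with the request body
                users.put(id, body);
                return "200";
            default:    // this resource does not support the verb
                return "405";
        }
    }
}

public class ResourceRouterDemo {
    public static void main(String[] args) {
        ResourceRouter router = new ResourceRouter();
        System.out.println(router.handle("PUT", "/users/42", "alice")); // prints 200
        System.out.println(router.handle("GET", "/users/42", null));   // prints 200 alice
        // An RPC-flavored URI like "/getUser?id=42" would instead bake
        // the operation into the path.
    }
}
```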
There's More to REST (on) than HTTP
The REST arguments are about simplifying system interface needs, to make certain types of things easier to do. TCP was the initial go at this, years ago: a simple, guaranteed stream of bytes transported from one place to another. The issue was that "transport" requires recurring programming structure/support in each software system to get to the point of having the ability to "transfer" something. That is, in order to transfer a known bit of information from one place to another, you need a bit more protocol to enumerate what's being transferred in the bits that TCP is transporting between systems.
Early protocols like SMTP and FTP were designed for transferring specific types of information from one point to another. What we see in those protocols is the basic issue that HTTP has tried to solve: how can we have one protocol to transfer "anything"? A predominant issue with SMTP is that there is no sender authentication. A recurring issue with FTP is that the user must indicate binary vs text transfer, because FTP has no idea what you are sending, and the MSDOS vs UNIX line-termination issue is still around.
HTTP now provides the ability to require the user to authenticate for access. HTTP solves the text vs binary issue by saying that everything is "binary"; the Content-Type: header tells the user/application receiving the transferred "information" what type of data it is, which takes care of content-formatting issues. MIME was already being used in SMTP to make attachments possible, so its use in HTTP wasn't anything new.
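A minimal sketch of that receiving-side behavior, under a deliberately simplified rule (an assumption for this sketch): `text/*` payloads decode as UTF-8 text, everything else stays opaque bytes. Real HTTP consults the charset parameter of the header, which is more involved:

```java
import java.nio.charset.StandardCharsets;

// The receiver interprets an opaque byte payload purely from the
// Content-Type header, as HTTP prescribes.
public class ContentTypeDemo {
    public static Object interpret(String contentType, byte[] payload) {
        if (contentType.startsWith("text/")) {
            // Simplifying assumption: decode all text as UTF-8.
            return new String(payload, StandardCharsets.UTF_8);
        }
        return payload; // opaque binary: hand it to the application as-is
    }

    public static void main(String[] args) {
        byte[] bytes = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(interpret("text/plain", bytes));            // prints hello
        System.out.println(interpret("application/octet-stream", bytes)
                instanceof byte[]);                                    // prints true
    }
}
```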
An old friend of mine, Ned Freed, was responsible for the development of MIME. I still remember the day he told us that E-Mail would never really be useful for anything! I have to give him a break on that. He had been burdened with BITNET which truly was not useful!
In the end, the fact that REST uses HTTP is nothing particularly amazing. There are some really old design/architecture concepts involved in the design of HTTP, which have fallen out of years of experience with other systems where similar concepts have existed for some time. The formalization of the GET, POST, and PUT operations in HTTP (and the others, which are less visible) and the specification of those operations really isn't much different from what was already possible by combining a few tools.
From my perspective, the important part was the creation of the single HTTP service to perform all the tasks in one process. The tying of the operations (GET, POST, and PUT) to the URIs was similar to the SMTP "Mail From:" and "RCPT TO:", where a namespace had been created to indicate a particular resource (a user's mailbox).
The Servlet mechanism in Java based HTTP servers (and similar scripting/control facilities in other HTTP servers) allows the 3 operations to be used in a compartmentalized way which creates the "resource" layer that the server reveals through the URIs that it accepts. This is just good OO design supporting the HTTP URI model.
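That compartmentalized doGet/doPost/doPut style can be sketched without a servlet container by using the JDK's built-in `com.sun.net.httpserver`, which lets the example run self-contained; the `/greeting` resource and its handler are made up for illustration:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// One handler object per resource, dispatching on the HTTP method --
// the same compartmentalization a Servlet's doGet/doPost/doPut gives you.
class GreetingResource implements HttpHandler {
    @Override
    public void handle(HttpExchange ex) throws IOException {
        if ("GET".equals(ex.getRequestMethod())) {
            byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        } else {
            ex.sendResponseHeaders(405, -1); // only GET is meaningful here
        }
    }
}

public class ResourceServerDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/greeting", new GreetingResource()); // URI -> resource
        server.start();
        int port = server.getAddress().getPort();

        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/greeting"))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints hello
        server.stop(0);
    }
}
```

The server reveals the "resource" layer through the URIs it accepts, and each handler deals only with its own resource.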
Without all of these pieces together, HTTP would not be any more useful than a simple RPC scheme; you'd have to write the same amount of code. REST-based architectures really benefit from many attributes of the HTTP implementations more than they benefit from the protocol itself.
These same architectural benefits are available in many Java-based platforms that have utilized good OO principles and that provide container semantics similar to the servlet model, where Java's dynamic code-loading facilities make it easy to plug in code at runtime.
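One minimal sketch of that runtime plug-in idea, using `Class.forName` to resolve a handler class by name. The `Handler` interface and `EchoHandler` class are hypothetical; in practice the class name would come from configuration and the class might be loaded from a separate class loader:

```java
// A shared interface that both the platform and the plug-in agree on.
interface Handler {
    String handle(String input);
}

// A stand-in for a plug-in discovered and loaded at runtime.
class EchoHandler implements Handler {
    public String handle(String input) {
        return input;
    }
}

public class PluginDemo {
    public static void main(String[] args) throws Exception {
        // Resolve the class by name at runtime and instantiate it
        // through the shared interface -- no compile-time dependency
        // on the concrete plug-in class.
        Class<?> cls = Class.forName("EchoHandler");
        Handler h = (Handler) cls.getDeclaredConstructor().newInstance();
        System.out.println(h.handle("ping")); // prints ping
    }
}
```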
1) SOAP may have been intended as an interoperability mechanism, but in practice it is an obstacle to interoperability. Vendors implement the spec differently and chaos ensues. A single (if not simple) example:
2) The analogy between SOAP envelopes and IP headers is a false analogy. It results in the SOAP server having to open and parse payloads in order to do content-based routing. This is wrong from every standpoint: architecture, semantics, security, decoupling, and separation of concerns.
Not "more tightly", but with a better partitioning of responsibility between the transport and the endpoints.
No, I am not aware of a comprehensive published alternative to SOAP; I am pointing out the need for such. REST is one layer of the solution. Java, or any other platform, has nothing to do with it. HTTP and XML are the keys to interoperability. Any layering of aspect protocols on top of HTTP requires payload-level standards, based on some kind of envelope-letter pattern, but the problem with the way SOAP implements that pattern is the SOAP server and its responsibility for content-based routing. Content-based routing is a bad semantic, because it makes the wrong actor do the work and share the knowledge. Meta-addressing, implemented within the transport layer, is the correct approach.
Think of it in terms of the W3C Web Architecture: what are the resources? The services themselves are the resources. A commonplace error is to (in effect) model service providers (i.e., applications) as resources. Still worse in that direction is the SOAP model, in which the SOAP server itself is the only resource. In its purest form, there is only one SOAP server per domain, concentrating an entire enterprise's worth of semantics behind a single gatekeeper, which inevitably has to have all knowledge and all power. This is not the right direction.
We still await the future payload-level standards that will embody the assumption that the Web architecture has already done its job.
I'm fairly new to SOAP, especially as it may be used in the world of commercial servers, but I wonder if the standard is being blamed for the implementation decisions of a few vendors?
> the problem with the way SOAP implements that (envelope-letter) pattern is the SOAP server and its responsibility for content-based routing.

http://www.w3.org/TR/soap12-part0/ says SOAP is silent on the semantics of any application-specific data it conveys, as it is on issues such as the routing of SOAP messages, reliable data transfer, firewall traversal, etc.
> in terms of the W3C Web Architecture: what are the resources? The services themselves are the resources. A commonplace error is to (in effect) model service providers (i.e., applications) as resources.
I presume you have seen (poor) implementations modeling applications as resources, but I don't think it is a good idea to describe such instances as "service providers", because service provider is the common term used in describing SOA.
> In its purest form, there is only one SOAP server per domain, concentrating an entire enterprise's worth of semantics behind a single gatekeeper
I agree that sounds like a horrendous architecture. Can you point to any references saying that is part of the SOAP standard, or where it is in common practice?