Richard Searle has a blog post entitled
The SOAP retrieval anti-pattern
where he writes
I have seen systems that use SOAP based Web Services
only to implement data retrievals.
The concept is to provide a standardized mechanism for external systems to
retrieve data from some master system that controls the data of interest. This
has value in that it enforces a decoupling from the master system's data model.
It can also be easier to manage and control than the alternative of allowing the
consuming systems to directly query the master system's database tables.
...
The selection of a SOAP interface over a RESTful interface is also questionable. The SOAP interface has a few (generally one) parameters and then
returns a large object. Such an interface with a single parameter has a trivial
representation as a GET. A multi-parameter call can also
be trivial mapped if the parameters define a conceptual heirarchy (eg the ids of
a company and one of its employees).
Such a GET interface avoids all the complexities of
SOAP, WSDL, etc. AJAX and XForms
clients can trivially and directly use the interface. A browser can use XSLT to provide a human readable representation.
Performance can easily be boosted by interposing a web cache. Such
optimization would probably occur essentially automatically since any
significant site would already have caching. Such caching can be further
enhanced by using the HTTP header timestamps to compare
against the updated timestamps in the master system tables.
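Searle's point about mapping hierarchical parameters onto a GET is easy to picture. A minimal sketch, assuming a hypothetical service at example.com with companies and employees as resources (none of these names come from a real API):

```python
# Hypothetical sketch: a SOAP-style call such as GetEmployee(companyId, employeeId)
# collapses to a plain GET on a hierarchical URL. All names are illustrative.
from urllib.parse import quote

BASE = "http://example.com"  # assumed base URL, not a real service

def employee_url(company_id: str, employee_id: str) -> str:
    """Build the GET URL for the (company, employee) conceptual hierarchy."""
    return f"{BASE}/companies/{quote(company_id)}/employees/{quote(employee_id)}"

print(employee_url("acme", "1234"))
# → http://example.com/companies/acme/employees/1234
```

Any HTTP client, browser, or cache can consume that URL directly, with no SOAP toolkit or WSDL in sight.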
I agree 100%; web services that use SOAP solely for data retrieval are usually a
sign that the designers of the service need to get a clue when it comes to building
distributed applications for the Web.
PS: I realize that my employer has been guilty of this in the past. In fact, we've been known to do this at MSN as well although at least we also provided RESTful interfaces to the service in that instance. ;)