There is this broad-reaching debate that has been going on for months about remoting, Web services, Enterprise Services, DCOM and so forth. In short, it is a debate about the best technology to use when implementing client/server communication in .NET.
I've weighed in on this debate a few times with blog entries about Web services, trust boundaries and related concepts. I've also had discussions about these topics with various people such as Clemens Vasters, Juval Lowy, Ingo Rammer, Don Box and Michele Leroux Bustamante. (all the "experts" tend to speak at many of the same events, and get into these discussions on a regular basis)
It is very easy to provide a sound bite like "if you aren't doing Enterprise Services you are creating toys" or something to that effect. But that is a serious oversimplification of an important issue.
Because of this, I thought I'd give a try at summarizing my thoughts on the topic, since it comes up with Magenic's clients quite often as well.
Before we get into the article itself, I want to bring up a quote that I find instructive:
"The complexity is always in the interfaces" - Craig Andrie
Years ago I worked with Craig and this was almost like a mantra with him. And he was right. Within a small bit of code like a procedure, nothing is ever hard. But when that small bit of code needs to use or be used by other code, we have an interface. All of a sudden things become more complex. And when groups of code (objects or components) use or are used by other groups of code things are even more complex. And when we look at SOA we're talking about entire applications using or being used by other applications. Just think what this does to the complexity!
Terminology
I think a lot of the problem with the debate comes because of a lack of clear terminology. So here are the definitions I'll use in the rest of this article:
| Term | Meaning |
| --- | --- |
| Layer | A logical grouping of similar functionality within an application. Often layers are separate .NET assemblies, though this is not a requirement. |
| Tier | A physical grouping of functionality within an application. There is a cross-process or cross-network boundary between tiers, providing physical isolation and separation between them. |
| Application | A complete unit of software providing functionality within a problem domain. Applications are composed of layers, and may be separated into tiers. |
| Service | A type of application interface that allows other applications to access some or all of the functionality of the application exposing the service. Often this interface is in the form of XML, and often that XML follows the Web services specifications. |
I realize that these definitions may or may not match those used by others. The fact is that all of these terms are so overloaded that intelligent conversation is impossible without some sort of definition or clarification. If you dislike these terms, please feel free to mentally substitute your own favorite overloaded terms for them in the above table and throughout the remainder of this article :).
First, note that there are really only three entities here: applications, tiers and layers.
Second, note that services are just a type of interface that an application may expose. If an application only exposes a service interface, I suppose we could call the application itself a service, but I suggest that this only returns us to overloading terms for no benefit.
A corollary to the above points is that services don't provide functionality. Applications do. Services merely provide an access point for an application's functionality.
Finally, note that services are exposed for use by other applications, not other tiers or layers within a specific application. In other words, services don't create tiers, they create external interfaces to an application. Conversely, tiers don't create external interfaces, they are used exclusively within the context of an application.
Layers
In Chapter 1 of my .NET Business Objects books I spend a fair amount of time discussing the difference between physical and logical n-tier architecture. By using the layer and tier terminology perhaps I can summarize here more easily.
An application should always be architected as a set of layers. Typically these layers will include:

- Presentation
- UI
- Business logic
- Data access
- Data management

We group our code into layers like this for two primary reasons.
First, we are grouping similar application functionality together to provide for easier development, maintenance, reuse and readability.
Second, we are grouping application functionality such that external services (such as transactional support, or UI rendering) can be provided to specific parts of our code. Again, this makes development and maintenance easier, since (for example) our business logic code isn't contaminated by the complexity of transactional processing during data access operations. Reducing the amount of external technology used within each layer reduces the surface area of the API that a developer in that layer needs to learn.
In many cases each layer will be a separate assembly, or even a separate technology. For instance, the data access layer may be in its own DLL. The data management layer may be the JET database engine.
Tiers
Tiers represent a physical deployment scenario for parts of an application. A tier is isolated from other tiers by a process or network boundary. Keeping in mind that cross-process and cross-network communication is expensive, we must always pay special attention to any communication between tiers to make sure that it is efficient given these constraints. I find it useful to view tier boundaries as barriers. Communication through the barriers is expensive.
Specifically, communication between tiers must be (relatively) infrequent, and coarse-grained. In other words, send few requests between tiers, and make sure each request does a relatively large amount of work on the other side of the process/network barrier.
Layers and Tiers
It is important to understand the relationship between layers and tiers.
Layers are deployed onto tiers. A layer does not span tiers. In other words, there is never a case where part of a layer runs on one tier and part of the layer runs on another tier. If you think you have such a case, then you have two layers -- one running on each tier.
Because each layer is a discrete unit, we can never have more tiers than layers. In other words, if we have n layers, then we have n or fewer tiers.
Note that thus far we have not specified that communication between layers must be efficient. Only communication between tiers is inherently expensive. Communication between layers could be very frequent and fine-grained.
However, notice also that tier boundaries are also layer boundaries. This means that some inter-layer communication does need to be designed to be infrequent and coarse-grained.
For all practical purposes we can only insert tiers between layers that have been designed for efficient communication. This means that it is not true that n layers can automatically be deployed on n tiers. In fact, the number of potential tiers is entirely dependent on the design of inter-layer communication.
This means we have to provide terminology for inter-layer communication:
| Term | Meaning |
| --- | --- |
| Fine-grained | Communication between layers involves the use of properties, methods, events, delegates, data binding and so forth. In other words, there's a lot of communication, and each call between layers only does a little work. |
| Coarse-grained | Communication between layers involves the use of a very few methods. Each method is designed to do a relatively large amount of work. |
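To make the distinction concrete, here is a minimal sketch (the types and members are hypothetical, not drawn from any particular framework) contrasting a fine-grained interface, appropriate within a tier, with a coarse-grained interface, appropriate across a tier boundary:

```csharp
using System;

// Fine-grained: many small calls, each doing a little work. Fine within a
// tier, but painfully chatty if every call crosses a process or network boundary.
public interface ICustomerEditor
{
    int Id { get; set; }
    string Name { get; set; }
    string Address { get; set; }
    void Validate();
    void Save();
}

// Coarse-grained: very few methods, each doing a relatively large amount of
// work and moving all required data in a single call.
public interface ICustomerDataAccess
{
    CustomerData FetchCustomer(int id);      // one round trip retrieves everything
    void UpdateCustomer(CustomerData data);  // one round trip saves everything
}

// The data moved across the boundary in a single coarse-grained call.
[Serializable]
public class CustomerData
{
    public int Id;
    public string Name;
    public string Address;
}
```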
If we have n layers, we have n-1 layer interfaces. Of those interfaces, some number m will be coarse-grained. This means that we can have at most m+1 tiers. For example, an application with five layers has four layer interfaces; if two of them are coarse-grained, the application can be deployed on at most three tiers.
In most applications, the layer interface between the presentation and business logic layers is fine-grained. Microsoft has provided us with powerful data binding capabilities that are very hard to give up. This means that m is virtually never n-1, but rather starts at n-2.
In most modern applications, we use SQL Server or Oracle for data management. The result is that the layer interface between data access and data management is typically coarse-grained (using stored procedures).
I recommend making the layer interface between the business logic and data access layers coarse-grained as well. This provides for flexibility in placement of these layers into different tiers so we can achieve different levels of performance, scalability, fault-tolerance and security as required.
In a web environment, the presentation is really just the browser, and the actual UI code runs on the web server. Note that this is, by definition, two tiers - and thus two or more layers. The interaction between web presentation and web UI is coarse-grained, so this works.
In the end, this means we have some clearly defined potential tier boundaries that map directly to the coarse-grained layer interfaces in our design. These include:

- Between the presentation (browser) and UI layers in a web application
- Between the business logic and data access layers
- Between the data access and data management layers
Protocols and Hosts
Now that we have an idea how layers and tiers are related, let's consider this from another angle. Remember that layers are not only logical groupings of domain functionality, but are also grouped by technological dependency. This means that, when possible, all code requiring database transactions will be in the same layer (and thus the same tier). Likewise, all code consuming data binding will be in the same layer, and so forth.
The net result of this is that a layer must be deployed somewhere that the technological dependencies of that layer can be satisfied. Conversely, it means that layers that have few dependencies have few hard restrictions on deployment.
Given that tiers are physical constructs (as opposed to the logical nature of layers), we can bind technological capabilities to tiers. What we're doing in this case is defining a host for tiers, which in turn contain layers. In the final analysis, we're defining host environments in which layers of our application can run.
We also know that we have communication between tiers, which is really communication between layers. Communication occurs over specific protocols that provide appropriate functionality to meet our communication requirements. The requirements between different layers of our application may vary based on functionality, performance, scalability, security and so forth. For the purposes of this article, the word protocol is a high-level concept, encompassing technologies like DCOM, Remoting, etc.
It is important to note that the concept of a host and a protocol are different but interrelated. They are interrelated because some of our technological host options put restrictions on the protocols available.
In .NET there are three categorical types of host: Enterprise Services, IIS or custom. All three hosts can accommodate ServicedComponents, and the IIS and custom hosts can accommodate Services Without Components (SWC).
The following table illustrates the relationships:
| Host | Protocols | Technologies |
| --- | --- | --- |
| Enterprise Services | DCOM | Simple .NET assembly, ServicedComponent |
| IIS | Web services | Simple .NET assembly, ServicedComponent |
| Custom | Remoting | Simple .NET assembly, ServicedComponent |
It's important to note that we can easily host ServicedComponent objects or Services Without Components in an IIS host, using Web services or Remoting as the communication protocol.
All three hosts can host simple .NET assemblies. For the IIS and custom hosts this is a native capability. However, Enterprise Services can host normal .NET assemblies by having a ServicedComponent dynamically load .NET assemblies and invoke types in those assemblies. Using this technique it is possible to create a scenario where Enterprise Services acts as a generic host for .NET assemblies. I do this in my .NET Business Objects books, for instance.
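As a rough illustration of that technique (this is a simplified sketch with hypothetical names, not the actual implementation from the books), a ServicedComponent can use reflection to load an ordinary .NET assembly and invoke a type within it:

```csharp
using System;
using System.EnterpriseServices;

// A ServicedComponent acting as a generic host: it loads the requested type
// from an ordinary .NET assembly and invokes a method on it via reflection.
// Arguments and return values must be serializable to cross the DCOM boundary.
public class GenericHost : ServicedComponent
{
    public object Execute(string assemblyQualifiedTypeName, string methodName, object[] args)
    {
        Type targetType = Type.GetType(assemblyQualifiedTypeName, true);
        object instance = Activator.CreateInstance(targetType);
        return targetType.GetMethod(methodName).Invoke(instance, args);
    }
}
```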
Hosts
What we're left with is a choice of three hosts. If we choose Enterprise Services as the host then we've implicitly chosen DCOM as our protocol. If we choose IIS as a host we can use Web services or Remoting, and also choose to use or not use the features of Enterprise Services. If we choose a custom host we can choose Web services, Remoting or DCOM as a protocol, and again we can choose to use or not use Enterprise Services features.
Whether you need to use specific Enterprise Services features is a whole topic unto itself. I have written some articles on the topic, the most broad-reaching of which is this one.
However, there are some things to consider beyond specific features (like distributed transactions, pooled objects, etc.). Specifically, we need to consider broader host issues like stability, scalability and manageability.
Of the three hosts, Enterprise Services (COM+) is the oldest and most mature. It stands to reason that it is probably the most stable and reliable.
The next oldest host is IIS, which we know is highly scalable and manageable, since it is used to run a great many web sites, some of which are very high volume.
Finally there's the custom host option. I generally recommend against this except in very specific situations, because writing and testing your own host is hard. Additionally, it is unlikely that you can match the reliability, stability and other attributes of Enterprise Services or IIS.
So do we choose Enterprise Services or IIS as a host? To some degree this depends on the protocol. Remember that Enterprise Services dictates DCOM as the protocol, which may or may not work for you.
Protocols
Our three primary protocols are DCOM, Web services and Remoting.
DCOM is the oldest, and offers some very nice security features. It is tightly integrated with Windows and with Enterprise Services and provides very good performance. By using Application Center Server you can implement server farms and get good scalability.
On the other hand, DCOM doesn't go through firewalls or other complex networking environments well at all. Additionally, DCOM requires COM registration of the server components onto your client machines. Between the networking complexity and the deployment nightmares, DCOM is often very unattractive.
However, as with all technologies it is very important to weigh the pros of performance, security and integration against the cons of complexity and deployment.
Web services is the most hyped of the technologies, and the one getting the most attention from key product teams within Microsoft. Even if you cut through the hype, it remains an attractive technology due to the ongoing work to enhance it with new features and capabilities.
The upside to Web services is that it is an open standard, and so is particularly attractive for application integration. However, that openness has very little meaning between layers or tiers of a single application. So we need to examine Web services using other criteria.
Web services is neither a high-performance nor a low-bandwidth technology.
Web services use the XmlSerializer to convert objects to/from XML, and that serializer is extremely limited in its capabilities. To pass complex .NET types through Web services you'll need to manually use the BinaryFormatter and Base64 encode the byte stream. While achievable, it is a bit of a hack to do this.
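A minimal sketch of that hack on the server side might look like this (the class, method and helper names are hypothetical, and real code needs error handling):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Web.Services;

public class DataPortal : WebService
{
    // Serialize an arbitrary [Serializable] object graph with the BinaryFormatter
    // and return it as a Base64 string, since the XmlSerializer cannot round-trip
    // complex .NET types on its own.
    [WebMethod]
    public string Fetch(string criteria)
    {
        object result = LoadBusinessObject(criteria);
        BinaryFormatter formatter = new BinaryFormatter();
        MemoryStream buffer = new MemoryStream();
        formatter.Serialize(buffer, result);
        return Convert.ToBase64String(buffer.ToArray());
    }

    private object LoadBusinessObject(string criteria)
    {
        // Placeholder for real business logic and data access.
        return criteria;
    }
}
```

The client does the reverse: Convert.FromBase64String followed by BinaryFormatter.Deserialize. It works, but both ends must share the same assemblies, which is exactly why it is a hack rather than an interoperable service.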
However, by using WSE we can get good security and reliability features. Also Web services are strategic due to the focus on them by many vendors, most notably Microsoft.
Again, we need to evaluate the performance and feature limitations of Web services against the security, reliability and strategic direction of the technology, keeping in mind that hacks exist to overcome the worst of the feature limitations, allowing Web services to offer functionality similar to DCOM or Remoting.
Finally we have Remoting. Remoting is a core .NET technology, and is very comparable to RMI in the Java space.
Remoting makes it very easy for us to pass complex .NET types across the network, either by reference (like DCOM) or by value. As such, it is the optimal choice if you want to easily interact with objects across the network in .NET.
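For example (hypothetical types, just to show the two options): a [Serializable] class crosses the Remoting boundary by value as a copy, while a class deriving from MarshalByRefObject is accessed by reference through a proxy, with each call traveling back across the network to the original object.

```csharp
using System;

// Passed by value: a serialized copy of the object travels to the other tier.
[Serializable]
public class OrderData
{
    public int Id;
    public decimal Total;
}

// Passed by reference: the object stays on its original tier and callers get a
// proxy; every method call crosses the network, much like DCOM.
public class OrderService : MarshalByRefObject
{
    public OrderData GetOrder(int id)
    {
        OrderData data = new OrderData();
        data.Id = id;
        data.Total = 100m;
        return data;
    }
}
```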
On the other hand, Microsoft recommends against using Remoting across the network. Primarily this is because Remoting has no equivalent to WSE and so it is difficult to secure the communications channel. Additionally, because Microsoft's focus is on Web services, Remoting is not getting a whole lot of new features going forward. Thus, it is not a long-term strategic technology.
Again, we need to evaluate this technology by weighing its superior feature set for today against its lack of long-term strategic value. Personally I consider the long-term risk manageable, assuming you employ intelligent application designs that shield you from potential protocol changes.
This last point is important in any case. Consider that DCOM is also not strategic, so using it must be done with care. Also consider that Web services will undergo major changes when Indigo comes out. Again, shielding your code from specific implementations is of critical importance.
In the end, if you do your job well, you'll shield yourself from any of the three underlying protocols so you can more easily move to Indigo or something else in the future as needed. Thus, the long-term strategic detriment of DCOM and Remoting is minimized, as is the strategic strength of Web services.
So in the end what do you do? Choose intelligently.
For the vast majority of applications out there, I recommend against using physical tiers to start with. Use layers - gain maintainability and reuse. But don't use tiers. Tiers are complex, expensive and slow. Just say no.
But if you must use physical tiers, then for the vast majority of low to medium volume applications I tend to recommend using Remoting in an IIS host (with the Http channel and BinaryFormatter), potentially using Enterprise Services features like distributed transactions if needed.
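As a sketch of what that looks like from the client's perspective (the interface, server URL and virtual directory are hypothetical; the server side is set up by registering a well-known object in the IIS application's web.config with a binary server formatter):

```csharp
using System;
using System.Collections;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

// Interface shared between client and server (normally in its own assembly).
public interface IOrderService
{
    string GetOrderSummary(int orderId);
}

public class Client
{
    public static void Main()
    {
        // Use the HTTP channel so IIS can host the server, but replace the
        // default SOAP formatter with the binary formatter for smaller payloads.
        IDictionary properties = new Hashtable();
        properties["name"] = "HttpBinary";
        BinaryClientFormatterSinkProvider clientProvider = new BinaryClientFormatterSinkProvider();
        ChannelServices.RegisterChannel(new HttpChannel(properties, clientProvider, null));

        // Obtain a proxy to a server-activated object hosted by IIS.
        IOrderService service = (IOrderService)Activator.GetObject(
            typeof(IOrderService), "http://appserver/MyApp/OrderService.rem");
        Console.WriteLine(service.GetOrderSummary(42));
    }
}
```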
For high volume applications you are probably best off using DCOM with an Enterprise Services host - even if you use no Enterprise Services features. Why? Because this combination is more than twice as old as Web services or Remoting, and its strengths, limitations and foibles are well understood.
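For completeness, here is a minimal sketch of a coarse-grained ServicedComponent for an Enterprise Services host (names are illustrative; a real server application also needs assembly-level attributes such as ApplicationName and ApplicationActivation, plus a strong-named assembly):

```csharp
using System.EnterpriseServices;

// A coarse-grained server component. Registered in a COM+ server application,
// it is reachable from client machines over DCOM.
[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    // AutoComplete commits the distributed transaction if the method returns
    // normally and aborts it if an exception is thrown.
    [AutoComplete]
    public void ProcessOrder(int orderId)
    {
        // Do all the work for this order in one call, then return.
    }
}
```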
Note that I am not recommending the use of Web services for cross-tier communication. Maybe I'll change my view on this when Indigo comes out - assuming Indigo provides the features of Remoting with the performance of DCOM. But today it provides neither the features nor performance that make it compelling to me.
This article has been reprinted with the author's express permission.
To share your thoughts on this issue, visit the comments section of Rocky's blog.
About the Author
Rockford Lhotka is the author of numerous books, including the Expert One-on-One Visual Basic .NET & C# Business Objects books. He is a Microsoft Software Legend, Regional Director, MVP and INETA speaker. Rockford speaks at many conferences and user groups around the world and is a columnist for MSDN Online. Rockford is the Principal Technology Evangelist for Magenic Technologies, one of the nation's premier Microsoft Gold Certified Partners dedicated to solving today's most challenging business problems using 100% Microsoft tools and technology.