Tuesday, August 18, 2009

XML in IoC containers: A hell or a realm?

While the essence of the Inversion-of-Control (IoC) container appears to be easily comprehensible and well understood, it is subtly confusing and badly misread. One of the biggest confusions is the use of XML. Namely, application composite wirings/configurations are programmed as XML documents (or textual strings). This practice in IoC containers, notoriously known as the "XML Hell", draws various FUD criticisms, such as:
  • This XML code is merely verbose, lousy procedural scripting.
  • This XML code is poorly debuggable.
  • This XML code lacks static type safety.
  • This XML code is refactoring-averse.
As will be evidenced through the following discussions, these FUD claims not only completely miss the point but also turn out to bring up examples that affirm the strength and significance of this XML programming realm instead.

Is this XML code merely verbose, lousy procedural scripting?
First of all, as pointed out in the article IoC containers other than the DI pattern, this XML code is not a script that verbalizes the procedure to produce the required configurations. Instead, it is a model that visualizes the required composite layout itself (or its requirements and constraints). This is just like the distinction between a vehicle's assembly instruction book and its design blueprint. Hence, this XML code is meant to be schematically intuitive and self-documenting rather than linguistically fluent and concise.
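To make the blueprint analogy concrete, a composite description in this spirit might read as follows (the element and attribute names here are purely illustrative, not any particular container's schema):

```xml
<!-- Describes WHAT the composite is, not HOW to build it step by step -->
<assembly>
  <component id="engine" class="demo.V8Engine"/>
  <component id="car" class="demo.Sedan">
    <!-- A layout relationship, not an imperative call sequence -->
    <property name="engine" ref="engine"/>
  </component>
</assembly>
```

Nothing in this document dictates construction order; the container is free to derive the plumbing steps from the described layout.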

Secondly, as also pointed out in the article IoC containers other than the DI pattern, this XML code is intended to serve as a mediating data worksheet as well. These data worksheets are exchanged between IoC containers and independent applications/plugins that provide value-added features, such as model generation, editing, displaying, transformation, verification, storage, and querying. Therefore, this XML code is desired to be programmatically manipulable as first-class objects by heterogeneous programs rather than grammatically readable as natural-language sentences by English-speaking programmers.

It is widely thought that such a data modeling issue was no more than another nail for the universally effective golden hammer - procedural languages (such as Java). A chief scientist of a Java consulting firm once even asserted: "Everything that can be done with xml should be doable from a java program". Consequently, various kludges have been invented and enthusiastically practiced to eliminate XML for data modeling, for instance, by expressing data composite model layouts as procedural method-chaining layouts (i.e. the so-called fluent API coding style) or as Java-annotated data type layouts (i.e. metadata). Although claimed as substitutes for, or killers of, their XML counterpart, it is not hard to see that these clever kludges are merely poor man's mimics of declarative data description languages. Not only do they lack language-level support for formal and intuitive schema definitions and structure/content integrity constraints, but their code is also hard to manipulate as first-class objects. Although these "procedural language for data modeling" kludges appear to have certain claimed advantages (such as debuggability, static type safety, and refactoring-friendliness) over XML, they miss the fundamental points of both the procedural programming and data modeling paradigms in general.
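To illustrate the method-chaining mimicry described above, here is a minimal sketch in plain Java (all class and method names are hypothetical, not any particular DI framework's API):

```java
import java.util.ArrayList;
import java.util.List;

class Engine { }

class Car {
    Engine engine;
    void setEngine(Engine e) { this.engine = e; }
}

// A toy fluent "wiring" builder in the method-chaining style.
class Wiring {
    private final List<Object> parts = new ArrayList<>();

    // Each chained call is an imperative step executed in order,
    // not a declarative statement that can be inspected as data.
    Wiring add(Object part) { parts.add(part); return this; }

    Wiring wire(Car car, Engine engine) { car.setEngine(engine); return this; }

    static Car assemble() {
        Car car = new Car();
        Engine engine = new Engine();
        new Wiring().add(engine).add(car).wire(car, engine);
        return car;
    }
}
```

The chain may read declaratively, but it remains ordinary imperative code: each call executes in order, and nothing short of parsing the Java source can manipulate this "configuration" as a first-class data object.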

Is this XML code poorly debuggable?
Here, "debug" specifically means stepping and tracing through the runtime execution of procedural code at the source level. When compared to IoC containers, XML-eliminating DI frameworks frequently emphasize their advantage (as a killer feature) of being able to debug user-authored application assembling/configuring code written in procedural languages. Nevertheless, relying on such stepping/tracing debugging simply indicates the following two facts about these DI wiring programs and their languages:
  • these programs consist of user-designed procedural code and user-defined mutable states/variables;
  • these programs are written in languages that lack statically verifiable data schema support.
On the contrary, user-authored XML code in IoC containers is not a procedural script but a data description that models the required composite arrangements with well-defined schemata and statically verifiable integrity constraints. In general, code order in these XML programs does not imply the actual step-by-step imperative execution order of the underlying plumbing operations performed transparently by IoC containers. Hence, just as freedom from runtime-debugging syntax errors, static type errors, and memory pointer arithmetic faults is an advantage of the Java environment, being relieved of the burden of runtime-debugging structure/content constraint violations and IoC plumbing procedures is the very point and strength of IoC containers and of a data description language like XML!

Does this XML code lack static type safety?
This FUD criticism is specific to static type safety, because dynamic type safety is already supported in IoC containers by comparing reflection-retrieved metadata against the operation signatures referenced in the XML code. For static type safety, component type information is supposed to be retrieved from POJO (and/or POCO) component source code. Apparently, this FUD considered that to be impossible or significantly difficult.
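The dynamic check described above boils down to resolving each signature string through reflection, once, at container initialization. A minimal illustration in plain Java (the class and method names are arbitrary JDK examples, not a real container's API):

```java
import java.lang.reflect.Method;

class SignatureCheck {
    // Resolve a "class + method + parameter types" signature, as a container
    // would for an operation referenced in an XML descriptor.
    static Method resolve(String className, String method, Class<?>... params)
            throws ReflectiveOperationException {
        return Class.forName(className).getMethod(method, params);
    }

    public static void main(String[] args) throws Exception {
        // A valid signature resolves against the compiled metadata...
        Method m = resolve("java.lang.StringBuilder", "append", String.class);
        System.out.println(m);

        // ...while a bogus one is caught here, at start-up, not mid-execution.
        try {
            resolve("java.lang.StringBuilder", "noSuchMethod");
        } catch (NoSuchMethodException e) {
            System.out.println("rejected at initialization: " + e.getMessage());
        }
    }
}
```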

However, firstly, it is obvious that what should be blamed here is not the XML code but those POJO/POCO sources written in classic procedural languages. Namely, the difficulty (if any) is not in inspecting the operation signatures referenced in the XML code but in retrieving type information from the POJO/POCO source code. In other words, this issue actually reveals an advantage of the XML code over classic procedural-language code: it is explicitly manipulable as first-class objects.

Secondly, various "static reflection" engines for these procedural languages, such as Eclipse CDT C++ DOM/AST, JDT Java DOM/AST (since June 2004, Eclipse 3.0), GCC-XML (since 2002), and Eclipse JDT JDOM (as early as 2002, for Eclipse 1.x/2.x), have long and widely been available. With these engines, "static reflection" of type information from POCO/POJO component source code is as simple as dynamic reflection from compiled metadata. Therefore, static type-safety checking for this XML code is as easy as the widely supported dynamic type safety of IoC containers.

Thirdly, this FUD criticism itself is merely a red herring that misleads readers into believing that, without static type safety, this XML code would imply significant overhead (claimed to be 10 times slower) and be error-prone at runtime. Nevertheless, in IoC containers, the dynamic type check on each signature in the XML code need only be performed once (or very few times). Hence, the real performance overhead of dynamic type checking is negligible for almost all real-world applications that use decent IoC containers. More importantly, in the IoC/DI scenario, operation signatures referenced in the XML code are resolved (and checked) by the IoC container at the application's (especially a test application's) initialization or reinitialization phase. Hence, the XML code is no longer involved in the execution of the application once it passes this initialization/reinitialization phase, let alone causes runtime type errors in production applications. Certainly, a start-up-time type check that requires a start-up test is still different from an editing-time or compile-time check. However, arguing that such a start-up type check is practically unacceptable would contradict the earlier argument that emphasized the advantage of runtime debuggability, which implies more than merely starting the application.

Fourthly, with regard to static code checking, this FUD criticism inadvertently brings up a fundamental weakness of procedural languages and a significant strength of the XML paradigm. Namely, the grammar (or syntax structure) and static type-safety checks of procedural languages tend to be too primitive and too generic to ensure that user-authored procedures are free from runtime errors, or from wrong runtime results without errors. This is why procedural languages and their XML-eliminating DI frameworks emphasize runtime debugging and sufficient runtime test coverage. On the contrary, in IoC containers, the XML code does not specify runtime procedures but explicitly describes the required results. It is therefore not only inherently free from runtime errors caused by procedural bugs in user code but also enables much more sophisticated, problem/domain-specific schema verifications to statically diagnose errors in the user-described requirements, or even in the results themselves.

Is this XML code refactoring-averse?
This FUD claim can be put as: "POCO/POJO function signatures are expressed as literal strings in this XML code. Because literal strings in C++/Java code do not participate in code refactoring, this XML code is not able to participate in component interface refactorings either." The error in this FUD claim is the failure to understand that having a pair of quote characters surround a string is merely an appearance rather than the cause of "refactoring-averseness". In fact, the quote-character pairs surrounding string texts are removed at the lexical analysis stage, even before these strings are added to the abstract syntax trees (ASTs) used internally by compilers and refactoring engines. The "refactoring-averseness" of these strings is not due to the quote-character pairs surrounding them but simply because literal strings in the C++ and Java languages have no language-level connection to the names/signatures of variables, interfaces/classes, and/or functions/methods known to refactoring engines.

On the contrary, the mapping from the attribute values (quoted strings) in this XML code to POCO/POJO interface/class type names and function/method signatures is well defined. Otherwise, the underlying IoC engines would not even be able to resolve invocation methods through reflection in the first place. Hence, supporting refactoring on this XML code is straightforward, and having it participate in POCO/POJO component code refactoring is no more difficult than (if not significantly easier than) the original work of supporting refactoring on C++/Java code itself (would C++/Java be criticized as refactoring-averse as well?), not to mention that this integration approach has largely been simplified by the open architectures of existing refactoring IDEs (such as the Eclipse Language Toolkit (LTK) refactoring architecture) and the Eclipse CDT C++ DOM/AST and JDT Java DOM/AST "static reflection" engines.

Furthermore, the "refactoring" in this FUD criticism obviously means the interactive refactorings made in an IDE environment. As pointed out above, it is possible to have this XML code participate in such an interactive refactoring of POCO/POJO code. However, it would only make sense if the XML code and the POCO/POJO component/service implementation code were developed together within the same IDE project. Unlike programmatic API frameworks (as well as those XML-eliminating DI frameworks), which consider wiring configurations a statically built-in integral part developed together with and spread throughout their applications, IoC containers keep assembly/deployment arrangement code completely separated from POCO/POJO component/service implementations. Namely, this arrangement code tends to be authored independently outside of (or decoupled from) the development phase(s)/cycle(s), IDE project(s), and/or brain(s)/team(s)/vendor(s) of those components/services. Hence, having this XML code participate in an interactive refactoring of that POCO/POJO code is largely a phantom requirement arising from a misreading or misuse of IoC containers.

Tuesday, August 11, 2009

Inversion of Control Containers vs the Dependency Injection pattern

IoC containers are not about a design pattern

While the essence of the Inversion-of-Control container appears to be easily comprehensible and well understood, it is subtly confusing and badly misread. This paradox is reflected in various seemingly evident, widely taken-for-granted, but misleading interpretations, such as the following two well-known assertions on what these containers are about:
  • Inversion-of-Control (IoC) is simply the Dependency-Injection (DI) design pattern for loose coupling between business logic implementations.
  • IoC containers (hence, tend to be referred to as DI frameworks) are programming frameworks primarily to facilitate the DI design pattern.
Misled by these interpretations, many readers disappointingly concluded that IoC containers were merely about a trivial design pattern that not only was doable by hand in a few lines of code without a container but also had already been a plain old practice on the street for years before it became a buzzword hype. These stereotypes mistake the superficies (such as IoC/DI types 1, 2, and 3, reflection, and design patterns) for the substance. Consequently, the significance and implications of IoC containers have largely been neglected, as one can tell from the sigh of some DI folks: "I was expecting a paradigm shift, and all I got was a lousy constructor".

IoC containers emerged as a mainstream solution by successfully challenging the dominance of their predecessor(s), the old EJB (2.x), which was based on the service locator design pattern. If IoC containers were merely to facilitate a plain old hand-doable design pattern for loose coupling between components, then what made them superior to their predecessor(s)? Was EJB's service locator design the cause of tight coupling between components? Was the service locator design pattern not doable by hand (if this counted as an advantage)? Was the core engine of old EJB containers primarily there to facilitate the service locator design pattern? Or was the change from service locator to IoC/DI any paradigm shift (if we were expecting one)?

Firstly, neither does EJB's service locator design incur tight coupling between components, nor does the IoC/DI design loosen such coupling, if any! The objective and strength of IoC containers are orthogonal to the attempt to reduce component-component (or component-service) coupling. Rather, they are superior to their EJB predecessor(s) by completely eliminating component-container coupling. As an example: if A and B are two components (either concrete instances or abstract interfaces) and C is an IoC container that injects B's reference into A, then, contrary to the popular misinterpretation, the injection here has nothing to do with loose coupling between A and B but serves to completely decouple A (and B as well) from C. Here, in this example, C itself is neither injected with the reference of A or B, nor is its own reference injected into A or B.
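This decoupling can be sketched in a few lines of plain Java (a toy illustration, not a real container): the "container" C holds A and B and performs the injection reflectively, while A and B contain no reference to C whatsoever:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// A and B are container-agnostic: neither imports nor references C.
class B { }

class A {
    B b;
    public void setB(B b) { this.b = b; }
}

// C, the "container": it holds references to A and B and performs the
// injection, but its own reference is never handed to A or B.
class C {
    private final Map<String, Object> instances = new HashMap<>();

    void register(String name, Object o) { instances.put(name, o); }

    // Invoke the setter named in a (normally XML-supplied) wiring description.
    void inject(String target, String setter, String source) throws Exception {
        Object t = instances.get(target);
        Object s = instances.get(source);
        Method m = t.getClass().getMethod(setter, s.getClass());
        m.invoke(t, s);
    }

    public static void main(String[] args) throws Exception {
        C c = new C();
        A a = new A();
        B b = new B();
        c.register("a", a);
        c.register("b", b);
        c.inject("a", "setB", "b");  // B's reference is injected into A
        System.out.println(a.b == b);
    }
}
```

Swapping this toy container for another (or for hand wiring) requires no change to A or B, which is exactly the component-container decoupling the text describes.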

Secondly, neither was the service locator design hard to do by hand, nor was the core engine of old EJB there to facilitate this design pattern. Similarly, although IoC (or DI) appears as a core design pattern in IoC containers, as Stefano Mazzocchi once pointed out, it is not what IoC containers focus on doing but merely what they use. In fact, it is not that IoC containers facilitate the IoC/DI design; rather, this design serves the containers. In other words, the essence of IoC containers is not the "IoC" but the "container". It is just like the essence of "electric cars" being not "electricity" but the "car". It would miss the point to say that electricity could be generated without cars, and, worse, to conclude that electric cars were merely to facilitate (i.e. to generate) electricity.

Thirdly, changing from the old EJB's service locator design to the new IoC/DI design makes component containers seamless and neutral/non-invasive. However, this design change neither unveils a new paradigm nor alters the primary objective of the containers. Instead, it is this inherited objective itself, to be discussed further in this article, that implies the mindset and paradigm changes.
IoC containers != DI frameworks

As pointed out above, IoC containers are neither about loose coupling between components nor meant to facilitate the DI design pattern (or the Dependency Inversion Principle). However, many programming libraries known as DI frameworks (and the JSR 299) exist precisely to facilitate this pattern. Although these DI frameworks are orthogonal and even opposite to IoC containers in focus, objective, and philosophy, they are constantly claimed to be substitutes for, or even killers of, IoC containers. To avoid further confusion and meaningless apples-to-oranges comparisons, it is now necessary to differentiate IoC containers from these DI frameworks.

DI frameworks are programming libraries with the following objectives, focus, and characteristics:

  • They primarily facilitate the DI design pattern in wiring loosely coupled business logic implementation modules (objects) into applications.
  • They focus on providing programmatic APIs (e.g. templates, annotations, and binder classes/functions) for authoring DI configurations in the same implementation languages as their business logic modules.
  • They favor DI annotations tightly coupled with business/service logic modules and consider DI configurations a statically built-in, integral part of their applications.

Hence, DI frameworks emphasize the advantages of programmatic APIs for DI configuration programming. This reflects the belief that "the most powerful way to 'configure' something is to write the code that produces what you want. Code is precise. Code is good." In DI frameworks, these configuration "codes" are distinct from "data". They are either step-through-debuggable and testable procedural programs tightly integrated with their applications, or binary metadata structures/templates and annotations to be read by the DI framework through runtime reflection.

Quite differently, IoC containers are a category of assembly and deployment engines for component-based applications (namely, applications built from prefabricated, coarse-grained, high-level application or service function modules) with the following characteristics and objectives:
  • They use the IoC design to non-invasively control container-agnostic components (e.g. plain old C++ or Java objects/service interfaces) in assembling and deploying applications.
  • They are primarily to support the "code is model, code is data" paradigm for composite authoring, where:
    • program code explicitly models the required composite arrangements rather than the procedures that produce such arrangements (or their plumbing contexts, builders, or binders),
    • program code is not written in the implementation languages of the applications but in data representations neutral to these languages.
  • They clearly separate these composite arrangement model/data representations from their components and applications, and make them (the composite arrangement representations) independently manipulable as first-class data objects.
Hence, in the vision of IoC containers, an application is a composite of binary components and external services. Such a composite is declaratively programmed by modeling it in a first-class data representation. This "code is model, code is data" programming paradigm is what IoC containers (and their predecessors) are primarily about, why they are useful, and where they are fundamentally different from plumbing by hand (or as facilitated by DI frameworks) in classic OO languages. Therefore, it is this programming paradigm, rather than an OO design pattern, that is the very essence of IoC containers.

By "code is model", IoC containers hold that "a cost-effective, intuitive, and maintainable way to configure (assemble and deploy) applications is to model what you want rather than to write the code of how to produce it". Therefore, in IoC containers, user code is a self-documenting model that visualizes user requirements (i.e. what users want) rather than verbalizing user solutions (i.e. how to produce the configurations). In general, code order in these IoC programs does not imply the actual step-by-step imperative execution order of the underlying plumbing operations performed transparently by IoC containers. This enables stronger static code verification and frees users from concerns about IoC procedural bugs and the burden of runtime debugging.

By "code is data", this composite modeling code is not merely or necessarily a user-authored program but, most importantly, able to serve as a language-neutral mediating data worksheet to be handed to, or exchanged between, independent modeling applications or plugins in the form of internal data objects, online messages, or external off-line documents and records. Compared to the binary, immutable metadata/annotation/API-driven tight integration of DI frameworks, this data-driven approach enables a much more flexible yet safe integration between IoC containers and third-party modeling applications or plugins that generate, edit, display, transform, merge, refactor, compare, analyze, verify, and query this modeling code at the primitive POJO/POCO component API level as well as at various domain-specific modeling levels.

Certainly, this classification does not exclude the existence and usefulness of hybrid solutions sitting between these two; for instance, configuration modules that are separated from their applications but coded as metadata in the same implementation languages, with that metadata even manipulated at the source-code syntax level.

Tuesday, March 25, 2008

Inversion of Control vs Strategy Pattern

Inversion of Control (IoC), also known as Dependency Injection (DI), is orthogonal to the Strategy Pattern. Saying that they were the same pattern would be like saying that the Von Neumann architecture and the integrated circuit (IC) were the same thing.

The strategy pattern is one of many object-oriented partitioning designs. It suggests how to divide business logic into separate components. Inversion of control (IoC), on the other hand, is one of many application wiring, configuring, and lifecycle-controlling scenarios. It is about how to put separate business components together into applications.

Application components partitioned in the strategy pattern can be wired, configured, and controlled in various scenarios, not necessarily IoC. For instance, it is pretty common for applications to use policy registries (hence, a directory-lookup scenario) to dynamically add, remove, resolve, and swap policy (strategy or algorithm) implementations.
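The registry-lookup scenario mentioned above can be sketched as follows (all names are hypothetical): the application resolves and swaps strategies itself, with no container injecting anything:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Strategy pattern: interchangeable pricing policies behind one interface.
interface PricingPolicy {
    double price(double base);
}

// A policy registry (a directory-lookup scenario, not IoC): implementations
// are added, resolved, and swapped dynamically by name.
class PolicyRegistry {
    private final Map<String, PricingPolicy> policies = new ConcurrentHashMap<>();

    void register(String name, PricingPolicy p) { policies.put(name, p); }
    PricingPolicy lookup(String name) { return policies.get(name); }
}

class Checkout {
    public static void main(String[] args) {
        PolicyRegistry registry = new PolicyRegistry();
        registry.register("regular", base -> base);
        registry.register("half-price", base -> base * 0.5);

        // The application actively looks the policy up (service-locator
        // style), rather than having a container hand it in (IoC style).
        System.out.println(registry.lookup("half-price").price(100.0)); // 50.0
    }
}
```

The same PricingPolicy components could equally be wired by an IoC container; the partitioning design and the wiring scenario are independent choices.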

Similarly, IoC has been widely used to assemble applications that are partitioned into components in various scenarios beyond the single strategy pattern. For instance, IoC is used to wire event suppliers/consumers, clients/services, implementations/adapters, etc.

Sunday, February 24, 2008

Dynamic Proxies in PocoCapsule

In the pococapsule newsgroup, someone asked the following question about dynamic invocation proxies in PocoCapsule:

As far as i understand, these proxies are dependent on the configuration file (e.g. setup.xml).... I would think that the need for recompiling the proxy when the configuration file changes is a "big disadvantage". One of the great things of IoC is that the system can be configured by "just" changing the configuration file. But now users also have to recompile reflextion proxies (?). ... Can pxgenproxy generate reflextion code for ALL classes in the specified .h file(s)?

I would like to use this opportunity to clarify several related issues:

First of all, with PocoCapsule, one does not need to recompile dynamic proxies under configuration changes that only modify function invocation parameter values. One only needs to build new proxies for new function invocation signatures involved in the new configuration. In my opinion, this is not only desirable (no recompilation on parameter value changes) but also acceptable (building new dynamic proxies for new signatures). The assumption is that real-world application configurations should avoid applying new invocation signatures that have never been tested before. This kind of usage scenario automatically avoids the need for an on-the-field recompilation after a reconfiguration, because all dynamic proxies to be used by the application in the field should already have been generated and compiled during QA tests for deploying the application.

Secondly, many popular IoC containers today even favor static solutions over dynamic configurations. In these manual-programmatic (such as PicoContainer without Nano) or metadata-based (such as the Spring Framework with Spring-Annotation) solutions, not only new function signatures but even parameter value changes in configurations force recompilation. Although I am not keen on these solutions (in particular, recompilation under value changes is highly undesirable in my opinion), I do believe that these well-accepted and even enthusiastically pursued solutions indicate that such a recompilation does not bother real-world applications.

This is not because the industry does not recognize the claimed "disadvantage" of the on-the-field recompilation implied in these hot solutions, but because IoC frameworks are not intended to be yet another scripting environment. Rather, IoC frameworks are mainly for:
  • Separating plumbing logic (component life-cycle controls, wirings, initial property settings, etc.) from business logic, and supporting framework-agnostic business logic components.
  • Allowing users to set up (configure/deploy/etc.) an application declaratively (by expressing what it looks like, rather than the procedure of how to build it step by step).
  • Supporting the idea of software product lines (SPL) based on reusable and quickly reconfigurable components and domain-specific modeling.
Whether a given IoC framework is able to avoid on-the-field recompilation when new signatures appear in the declarative configuration descriptions is merely a bells-and-whistles feature rather than a "great thing" (nor is it a "big disadvantage" if the framework does not support it). In PocoCapsule, generated dynamic proxies are very small and cost negligible recompilation time for most applications, not to mention that:
  • On-the-field recompilation can largely be avoided if component deployments have been pre-tested (as discussed at the beginning).
  • This recompilation needs even less time than packaging deployment descriptors (e.g., packaging them into .war/.ear/.zip files).
Now, let's take a look at the seemingly "minor" disadvantages of the suggested solution that generates proxies for all classes in the specified header files:
  • More manual code fixes: I would suggest trying some of the relevant utilities, such as GCC-XML, on various header files on different platforms (including Windows, various unix/linux flavors, VxWorks, Symbian OS, etc.). Because IoC frameworks do not (and should not) prohibit users from using non-portable components, a utility that parses header files would have to either deal with non-portable header files (including various platform-specific C++ extensions) or require users to fix those header files manually before parsing. In the suggested scenario, developers who only intended to configure the application at a high level would have to apply more low-level code-fixing effort.
  • Bloated code generation and heavy runtime footprint: Based on various application examples, we compared PocoCapsule-generated proxy code to CERN REFLEX, which generates proxies for all classes in the specified header files. Typically, REFLEX produces 10 to 1,000 times more code than is actually needed for an IoC configuration. This redundant code eats megabytes of runtime memory (instead of a few, or a few tens of, kilobytes). This is because, in the suggested solution, one would have to generate proxies for all classes that are implicitly included (declared in other header files included by the specified header files), proxies for all classes used as parent classes of other classes, proxies for classes used as parameters of class methods, and so on. Otherwise, it would merely be 50 yards versus 100 yards; namely, one would still have the claimed "big" disadvantage of having to rebuild proxies after all.
  • Human-involved, manually edited filters: Utilities such as GCC-XML (and therefore CERN REFLEX) allow one to filter out unwanted proxies to reduce the size of the generated code. However, one would have to edit the filter configurations manually. The consequence of applying such filters is more code (or script) and more complexity to be handled and maintained manually. This immediately defeats the whole point of using IoC frameworks.
  • Additional footprint for a runtime type system: To support OO polymorphism (e.g. components that extend interfaces) without recompilation, simply generating all proxies is not sufficient. The solution would also have to provide a runtime type system (and additional metadata as well). This increases the application's runtime footprint by roughly another ~1 MB.
  • Generic programming (GP) would be prohibited: As we know, C++ template specialization mandates recompilation. We can't have compiled code that applies to all possible specializations. To ensure no recompilation, the solution would have to accept another "minor" disadvantage, namely prohibiting the use of GP. GP is used heavily in many PocoCapsule examples (see the CORBA examples that use "native" servant implementations). It significantly shortens the learning curve of some middleware, such as CORBA (one no longer needs to learn POA skeletons), simplifies application code, and supports legacy components at a much lower refactoring cost.
With all these disadvantages, all one would gain is an "advantage" that could help one shoot oneself in the foot -- deploying an application that involves wirings that have never been tested before.

Also see the wiki article: Reflection from Projection

Tuesday, November 13, 2007

DSM in IoC frameworks

[this is largely an article I posted on theserverside.com: here]

XML application descriptors (configurations) in the core schemas of various IoC frameworks are more or less plain Java or C++ (or other language) method invocations expressed in XML. Such a low-level schema has the advantages of being compact, straightforward, and applicable to general applications. However, it is poorly expressive and involves low-level programming signatures (APIs). Therefore, IoC core schemas are verbose, error-prone, and not desirable for domain users.

One approach to addressing these issues is to raise the abstraction level of XML configurations by supporting user-defined domain-specific modeling/language (DSM/DSL) schemas in IoC frameworks. Spring 2.0 introduced so-called extensible XML authoring (Spring 2.0, appendix B), allowing users to extend the core schema with user-implemented plug-in handlers. These manually crafted handlers process XML DOM elements that users define to extend the core schema. This scenario has the disadvantages of involving low-level XML DOM programming and of being tied to a proprietary (Spring 2.0) callback interface API. Therefore, it is not suitable for domain users, cannot be automated/tooled, and is not likely to be followed by other IoC containers.

The article Domain Specific Modeling in IoC frameworks presents another, more straightforward alternative based on the concept of model transformation. The idea is simply to leverage the IoC framework itself and the ubiquitous W3C XSLT technique, without involving any proprietary plug-in API or low-level XML DOM. To define a DSM, one only needs to define its XML schema and then design an XSLT stylesheet that maps a configuration in this DSM schema to a configuration in a target schema (such as the core schema). To use this new DSM schema, an XML configuration (in this DSM schema) only needs to have the stylesheet file name (or URL) specified in its processing instruction (PI) section. With an XSLT-transformer-integrated IoC container, this XML configuration will be recursively transformed until it reaches a final target configuration that does not carry such a transformation processing instruction (presumably, it ends up in the core schema).
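One step of this transformation pipeline can be sketched with the JDK's built-in XSLT support (the DSM and core-schema element names below are invented for illustration; a real container would read the stylesheet location from the document's PI):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

class DsmTransform {
    // Apply one XSLT transformation step, as an XSLT-integrated IoC
    // container would do for a configuration carrying a stylesheet PI.
    static String transform(String xml, String xsl) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // A configuration in a hypothetical high-level DSM schema...
        String dsm = "<pipeline><stage class='demo.Parser'/></pipeline>";

        // ...and a stylesheet mapping it down to a hypothetical core schema.
        String xsl =
              "<xsl:stylesheet version='1.0'"
            + " xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/pipeline'>"
            + "<beans><xsl:apply-templates/></beans>"
            + "</xsl:template>"
            + "<xsl:template match='stage'>"
            + "<bean class='{@class}'/>"
            + "</xsl:template>"
            + "</xsl:stylesheet>";

        System.out.println(transform(dsm, xsl));
    }
}
```

The container would keep re-applying this step until the output no longer names a stylesheet, at which point the core-schema result is handed to the normal assembly engine.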

This model-transformation-based DSM scenario is already supported in the PocoCapsule/C++ IoC and DSM framework and can be applied straightforwardly to most other IoC containers. PocoCapsule also supports so-called higher-order transformations (HOTs): the model-transformation scenario of user-defined high-level DSMs applies not only to application configurations but also to the transformation stylesheets themselves. Therefore, one can design one's own domain-specific transformation (DST) languages and use them, instead of XSLT, to define application DSM transformations.

With this model-transformation DSM scenario, an IoC framework can serve as a framework for building other user-defined or committee-designed component frameworks. Several such frameworks ship out-of-the-box with PocoCapsule, along with numerous examples. For instance, an SCA assembly model can be built as a DSM in merely 500 lines of code (XSL and C++), instead of several thousand lines of code and months of effort.

As argued in my article, with this DSM scenario in IoC frameworks, the disadvantages of XML configurations in core IoC schemas can largely be avoided, while the advantages of their declarativeness, self-documentation, schema validation, and easy manipulation become significant.


SCA considered harmful!

[Additional discussions: here and here]
[Full scale examples (bigbank, calculator etc.): here]

Advocates of the service component architecture (SCA) like to emphasize its strength of multi-language support whenever it is compared to competing Java-centric alternatives (such as JBI). Therefore, serious issues in the SCA C++ mapping naturally raise concerns about the soundness of SCA as a whole, unless one is willing to admit that SCA, too, is a Java-only marketecture that ships with some poorly designed non-Java language mappings pretty much as bells and whistles.

As I summarized in this and other articles, there are numerous fatal problems in the SCA C++ mapping designed by the committee. For instance:

1. The SCA C++ mapping is dangerously type-unsafe: The SCA assembly model relies heavily on the getService() method of component contexts. However, this method returns service objects as opaque pointers (void*). Business-logic implementations are supposed to typecast these opaque pointers back to the interface class types declared in .componentType side files. This kludge is equivalent to a C-style cast or a C++ reinterpret_cast without any type validation, which C/C++ professionals generally consider a dangerous code smell. First, such a cast will not work correctly under multiple inheritance. Second, it is very error-prone when component class types change: a mismatch between the type cast in application code and the actual type declared in the .componentType side file is neither reportable at compile time nor detectable at runtime.

2. Tight coupling to the container programming model: The dependency-lookup design of SCA tightly couples implementations to the underlying container. It largely prevents SCA components from being used in foreign or legacy runtime environments, and vice versa.

3. Tight coupling to the container thread model: On a call to ComponentContext::getCurrent() within a component implementation, SCA assumes the context of the calling component is in thread-local storage. This assumption ties the component wiring mechanism to the thread model of the underlying request-dispatch engine, significantly increases the cost of implementing SCA containers on existing web-services containers (or SOAP stacks), prohibits many useful application designs, and makes testing and debugging of SCA components more difficult.

With these flaws, and many others listed in the table here, SCA (especially its C++ mapping) would only make simple things hard and complex things impossible.

I am not suggesting we throw the baby out with the bath water. In violent agreement, some alternative solutions are suggested in my article. These solutions are based on inversion of control (IoC) and domain-specific modeling (DSM). One of them (here) even supports the exact SCA assembly model (SCDL), but replaces the SCA proprietary programming model with so-called plain-old C++ objects (POCOs). This design largely avoids the contrived vendor-lock-in kludges designed by the SCA committee and significantly simplifies both the container implementation and the applications built on top of it. See the list of examples (including a full-scale bigbank example, shown in the following diagram) for web services and SCA based on this design.