Useful Explanation: "iBATIS, Hibernate, and JPA: Which is right for you?", Strange Conclusion

The JavaWorld article "iBATIS, Hibernate, and JPA: Which is right for you?" explains the concepts behind iBatis and Hibernate really well. I don't fully agree with the explanation of JPA, though. JPA is nothing more than an abstraction layer, which encapsulates Hibernate, TopLink, OpenJPA or JPOX. You only have to rely on the API and not on the SPI (Service Provider Interface). JPA is enough for most use cases; everything else can usually be solved with the proprietary extensions of the particular SPI/provider/"driver". From the "simplicity" point of view I would say the following:

  • JPA is the simplest solution. The amount of XML is minimal. It is DRY. For the simplest example you only have to know two annotations: @Entity and @Id. See: EJB 3 Persistence (JPA) For Absolute Beginners - Or Create Read Update Delete (CRUD) in 2 Minutes And Two (library) Jars
  • Hibernate: I would differentiate whether you are going to use Hibernate with the "classic" XML configuration or with the annotation-driven approach. The first case is harder to maintain; the latter is very similar to JPA.
  • iBatis is the most powerful, but not that simple. It comes with the highest amount of XML configuration, which has to be maintained during the whole lifecycle. The tool support is rather weak. It already differs semantically from Hibernate and JPA on the conceptual level; it is more comparable to DAOs than to ORM. Fetched objects are immediately detached and have to be merged after every modification. This is very different from JPA and Hibernate.
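The two annotations mentioned in the JPA bullet really are all you need for the simplest case. A minimal sketch (the class and property names are made up for illustration):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

// A minimal JPA entity: @Entity marks the class as persistent,
// @Id marks the primary key. Everything else (table name, column
// names, ...) is defaulted by the provider.
@Entity
public class Book {

    @Id
    private long isbn;

    private String title;

    public Book() { } // JPA requires a no-arg constructor

    public Book(long isbn, String title) {
        this.isbn = isbn;
        this.title = title;
    }

    public long getIsbn() { return isbn; }

    public String getTitle() { return title; }
}
```

Together with a persistence.xml and the two provider jars, this class is already persistable via an EntityManager.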

Comparing JPA with Hibernate is like comparing JDBC with, e.g., the native Java DB / Derby drivers. The former is standardized, the latter more powerful. In my opinion you should always start with the standard. Standard means: more than one vendor/provider.

The article is still interesting and worth reading; I only have problems with the conclusion :-)

Java EE Fallacies feedback: Is transactional context not necessary for "traditional" JDBC access?

I got an interesting comment from Rob Bygrave on the post Fallacy 2: EJBs are too complex, POJOs are easier. He mentions:
"...Transaction Isolation levels come into play and the need for a 'Persistence Context' is an ORM specific requirement and not necessary for more traditional JDBC access..."
Which is partly true. If you do not have a transactional JDBC connection, you will get a new and independent ResultSet every time. This can become a problem once your application grows and you are no longer the only developer: the changes made in one connection will not be visible in another. So even working with a plain ResultSet can cause inconsistencies, because actions behind a facade are isolated from the DB perspective, which is more a bug than a feature.
You could ensure consistency using some "patterns"/approaches:

  • In every user action/use case the DB is accessed only once, or in read-only mode.
  • Or: A ResultSet instance is shared in a common context and reused in one logical action (method)
  • Or: The transactions are sorted (you read first, then write)
  • Or: You are using a JTA-DataSource and not DTOs.
Working only with JDBC works fine, but the transformation between ResultSet and Data Transfer Objects can cause additional challenges: you will create new copies of the same row in a logical transaction over and over again. That is not a problem in "one man show" projects, but in bigger teams it can lead to inconsistencies.
Of course you could build a transactional cache, or just use JPA :-).
Even plain JDBC access can become a challenge in bigger projects (in terms of the number of developers)... For master-data management, simple CRUD use cases etc., JDBC works well enough. JDBC 4.0 is even better, and from my perspective it could kill some proprietary persistence frameworks because of its simplicity, power and ease of deployment (it ships with JDK 1.6).

Abstract factory or just a factory in Java

The explanation of the abstract factory pattern:

"A software design pattern, the Abstract Factory Pattern provides a way to encapsulate a group of individual factories that have a common theme. In normal usage, the client software would create a concrete implementation of the abstract factory and then use the generic interfaces to create the concrete objects that are part of the theme. The client does not know (nor care) about which concrete objects it gets from each of these internal factories since it uses only the generic interfaces of their products. This pattern separates the details of implementation of a set of objects from its general usage." (from wikipedia) suggest the introduction of an abstract factory, which is subclassed by concrete realization. Every concrete factory cares about the construction of a concrete product. A simple example would be:

/*
 * GUIFactory example
 */
public abstract class GUIFactory {

    public static GUIFactory getFactory() {
        int sys = readFromConfigFile("OS_TYPE");
        if (sys == 0) {
            return new WinFactory();
        } else {
            return new OSXFactory();
        }
    }

    public abstract Button createButton();
}

class WinFactory extends GUIFactory {
    public Button createButton() {
        return new WinButton();
    }
}

class OSXFactory extends GUIFactory {
    public Button createButton() {
        return new OSXButton();
    }
}

(also borrowed from wikipedia :-)).

But in Java we can provide an even simpler and more generic, extensible solution:

public class GUIFactory {

    private static GUIFactory instance = null;

    public static GUIFactory getFactory() {
        if (instance == null) {
            instance = new GUIFactory();
        }
        return instance;
    }

    public Button createButton() {
        String fullyQualifiedProductName = readFromConfigFile("BUTTON_TYPE");
        try {
            return (Button) Class.forName(fullyQualifiedProductName).newInstance();
        } catch (Exception ex) {
            throw new IllegalStateException("Cannot create button: " + ex, ex);
        }
    }
}
In this case the factory is totally independent of the concrete product, which can easily be configured. Because of the dynamic configuration, the type safety of the product can only be checked at runtime, but you gain flexibility. Instead of a "home grown" implementation, standalone IoC containers can also be used to inject a product. In that case the factory belongs to the framework and does not have to be implemented in the scope of the project.

The Abstract Factory pattern is only interesting in case the creation of the different products differs (e.g. the constructors' signatures, or the way the products are created).

Exception chaining is evil (because of hidden dependencies) - the solution is:

This entry is inspired by Pascal's comment. He stated "Note that it also raises the point that remote services (Session Beans) should never throw technology-bound exceptions such as JPA ones. One should implement his own hierarchy of exceptions and translate them accordingly, you wouldn't want to have to deploy the JPA and other JEE jars on the clients just to have the exception classes (ewww).", which is absolutely my opinion as well.
But the decoupling can be (partially) achieved in a simpler way. Instead of stopping the exception chaining, the super-exception of your hierarchy should implement Serializable in a special way: instead of serializing everything, only the payload in String/XML format should be passed across the network boundaries. Then the client does not need all the libraries of the chained exceptions in its classpath.
To implement this you either have to implement the "secret" methods of the Serializable contract:

private void writeObject(ObjectOutputStream out)
    throws IOException;
private void readObject(ObjectInputStream in)
    throws IOException, ClassNotFoundException;

or implement the Externalizable interface and so the methods:

void writeExternal(ObjectOutput out)
    throws IOException;
void readExternal(ObjectInput in)
    throws IOException, ClassNotFoundException;

If you have your own hierarchy, it is enough to implement the interface in the top-level exception.
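A minimal sketch of such a top-level exception, assuming the Externalizable variant (the class and field names are made up): Externalizable bypasses default serialization entirely, so the chained causes and their classes never cross the wire, only a flattened String payload does.

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class ApplicationException extends Exception implements Externalizable {

    private String payload;

    public ApplicationException() { } // required by Externalizable

    public ApplicationException(String message, Throwable cause) {
        super(message, cause);
        // flatten the whole chain into a plain String up front
        StringBuilder sb = new StringBuilder(message);
        for (Throwable t = cause; t != null; t = t.getCause()) {
            sb.append(" | caused by: ").append(t);
        }
        this.payload = sb.toString();
    }

    public String getPayload() {
        return payload;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(payload); // only the String travels
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        payload = in.readUTF();
    }
}
```

On the client side, getCause() of the deserialized exception is null; the diagnostic information survives only in the payload String.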
With this strategy you can pass chained exceptions to the client. But it is not always possible to catch all "technology exceptions" in a session facade: sometimes exceptions occur only after the completion of the transaction (e.g. an optimistic collision). In that case you still need the Business Delegate.

Fallacy 9: It is sufficient to ensure the functionality of a distributed application with unit- and integration-tests.

The next item from the Java EE Fallacies. Developers often ignore the fact that most Java EE applications are distributed, concurrent applications, and are relatively happy with a green bar. To make things more exciting, some developers love to mock everything that could cause problems (database, backend systems etc.) AND forget (often even do not want) to test the application again in a production-near environment. Even with continuous integration servers like CruiseControl or Continuum, you only test your code in a production-similar environment in a sequential way.
Because the advantage of Java EE is the ability to serve concurrent users, it would also be interesting to test the application under load. Here it is not so important to check the performance; it is more important to ensure long-term stability. The tests do not have to be realistic; the main motivation for such tests is to see the application's behavior under heavy load (a night run). CPU and memory consumption, pool sizes and the request distribution among the cluster nodes should be stable. It is amazing how many deadlocks, non-working XA transactions, consistency issues, OutOfMemoryErrors, memory leaks, bottlenecks (synchronized methods) etc. you will see. Some of these issues can even require a complete refactoring of your business and UI logic (e.g. moving from pessimistic to optimistic locks). To minimize the costs, a distributed application should be load-tested at least once a week. My observation: Java EE applications are load-tested only a few weeks before production. Because there is no time left to fix the problems, no one really cares about the results...
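As a toy illustration of why a green bar is not enough: the following counter (a made-up example) passes any single-threaded unit test, but loses updates as soon as several threads hammer it, which only shows up under concurrent load.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class VisitCounter {

    private int count = 0; // not volatile, not synchronized

    public void visit() {
        count++; // read-modify-write: a race condition under concurrency
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        VisitCounter counter = new VisitCounter();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100_000; i++) {
            pool.execute(counter::visit);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // typically prints a value below 100000 (lost updates)
        System.out.println("expected 100000, counted " + counter.getCount());
    }
}
```

A sequential test always sees the correct count; only a load test makes the lost updates visible.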

Some useful tools:

  1. JMeter, Grinder, OpenSTA (if possible LoadRunner): load-testing frameworks
  2. JUnitPerf: a collection of JUnit test decorators used to measure the performance and scalability of functionality contained in existing JUnit tests.
  3. JConsole: part of Java SE 5. A JMX monitoring tool; you can easily monitor CPU, RAM, threads etc.
  4. LoadRunner (the commercial one) is expensive, but also powerful.

So unit tests, mocks etc. are nice, but only under heavy load can you see whether your application really works.

Some interesting patterns for design and implementation (TOP 5)

In my last entry about patterns I explained the fundamental concepts of (in my opinion) the building blocks of every architecture. The feedback on that entry was great, so here I describe some implementation-level / design patterns.

  1. Observer: you will find the first implementation already in JDK 1.0 (just scan the java.util package). Observer (together with Command) is the foundation of every event-distribution implementation. MVC is also based on Observer. The basic idea: you are interested in the state of an object; in case the state changes, you will be notified (pull or push strategies are possible). In Java EE a plain Observer does not work properly (the latency...), so it is a good idea to use e.g. JMS topics for the event distribution.
  2. Strategy: a cool name for a simple approach. The idea: make algorithms replaceable. The solution: hide them behind an interface and so make them replaceable (often wired with factories or IoC/DI). It's often used without knowing it.
  3. Template Method: provide a basic but unchangeable behavior (a sequence of method invocations) in a superclass for all subclasses. The idea is used in the Servlet API: HttpServlet overrides the service method and dispatches to the doGet, doPost etc. methods. If you would like to build another web framework (:-)) you have to extend HttpServlet and override the doGet and doPost methods. Very often the template methods are abstract; in HttpServlet they are concrete, but throw exceptions.
  4. Visitor: used for the harder problems, when you would like to walk through a node graph. It is often used to copy an object graph, for generic algorithms, or for cache replication; e.g. advertisements (= pointers to services) are replicated from peer to peer using walker algorithms. It is not very easy to implement (just think about cycle recognition).
  5. Chain of Responsibility: similar to Visitor. The nodes are able to deny the execution and to pass the control to the next node in the chain. This pattern is often used together with Decorator to implement simple AOP (see servlet filters or Spring's AOP). You can also build a simple rule engine with CoR and Command: the rules are executed and decide whether to pass the control to the next node or not. I built a simple rule engine which read Excel spreadsheets and generated the chain of rules (see my last entry about Excel and "MDA").
If you already know Decorator, you will no longer find patterns like Wrapper, Delegate or Proxy so exciting. In the real world it is not a big challenge to use all of the patterns; rather, the opposite is true. The architect/designer or developer should restrict the number of patterns and set constraints for their usage and combination. Less is more...
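The rule-engine idea from item 5 fits into a few lines. A minimal Chain of Responsibility sketch with made-up rule names: each node either produces a verdict or passes the request on.

```java
// Each rule decides whether it handles the request; otherwise it
// delegates to the next node in the chain.
abstract class Rule {

    private Rule next;

    Rule setNext(Rule next) {
        this.next = next;
        return next; // allows fluent chaining
    }

    String evaluate(int amount) {
        String verdict = decide(amount);
        if (verdict != null) {
            return verdict;
        }
        return next != null ? next.evaluate(amount) : "no rule matched";
    }

    protected abstract String decide(int amount);
}

class SmallOrderRule extends Rule {
    protected String decide(int amount) {
        return amount < 100 ? "auto-approve" : null;
    }
}

class LargeOrderRule extends Rule {
    protected String decide(int amount) {
        return amount > 10_000 ? "manual review" : null;
    }
}
```

A chain of SmallOrderRule and LargeOrderRule then approves small amounts, escalates huge ones, and falls through for everything in between; new rules are added without touching the existing ones.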

Most Important Patterns (Top 6)

Recently a developer asked me what the most know-worthy patterns are, in the context of Java EE. My answer was:

  1. Facade: it decouples independent classes and, more importantly, decreases the granularity (makes the interface more coarse-grained). The Session Facade from Java EE is an example.
  2. Adapter: makes incompatible things compatible. Especially important in server programming, in case you have to talk to legacy backend systems like SAP, CICS or IMS. The Business Delegate in J2EE 1.4 (it catches RemoteExceptions and throws something else), DAOs or JCA connectors are adapters.
  3. Decorator: enhances an interface (in our case the Facade) with additional aspects (it is actually the beginning of AOP :-)). In Java EE we have servlet filters, interceptors, or implicit decoration with transactions, state or security. In Java SE the whole java.io package is a decorator.
  4. Interface (actually not a standard pattern): needed for encapsulation and for decoupling the clients from the interface's realization; it is thus the beginning of service orientation or SOA.
  5. Factory/Builder: encapsulates the creation of different implementations of a Java interface. Can be part of a framework (see Spring or Java EE 5), or of the project architecture.
  6. Command: provides a simple and stable interface (often one method with a name like execute, go, run, actionPerformed etc.). The implementation provides the behavior; having only the interface, it is not possible to see what happens :-). Command is the foundation of JMS, of the whole SOA (stands for Same Old Architecture :-)) and also of the event handling of frameworks like Swing (ActionListener) or Struts (Actions).
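Command's "stable interface, hidden behavior" point from item 6 can be sketched in a few lines (all names are made up):

```java
import java.util.ArrayList;
import java.util.List;

// One stable method; the behavior lives entirely in the implementations.
interface Command {
    String execute();
}

class PrintReport implements Command {
    public String execute() {
        return "report printed";
    }
}

class SendMail implements Command {
    public String execute() {
        return "mail sent";
    }
}

// The caller sees only the interface; it cannot tell what happens.
class CommandQueue {
    List<String> run(List<Command> commands) {
        List<String> results = new ArrayList<>();
        for (Command c : commands) {
            results.add(c.execute());
        }
        return results;
    }
}
```

Because the queue depends only on the interface, commands can be serialized, queued (JMS), or wired as event handlers without the caller changing.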
So a typical J2EE 1.4/Java EE 5 architecture consists of:
Business Delegate (Adapter + Factory) ------> Decorator (transactions, logging, security) + Session Facade (Facade), a bunch of POJOs or Session Beans, and some DAOs (Adapter) or connectors to the backends. Message Driven Beans (Command) are often used for batch processing.

It's amazing, but it is possible to build almost every (Java EE) architecture with these building blocks.

Excel Driven Architecture and Design with Java EE (EDAD) - beyond MDA :-)

UML 1.x and 2.0 are wonderful, but you will find only a few domain experts, analysts and developers who really understand stereotypes, tagged values and profiles. Without deep UML knowledge your models become ambiguous, verbose, and therefore useless for efficient communication between stakeholders. At the same time the "business" stakeholders love to use spreadsheets for modelling the business domain, business rules, and sometimes also coarse-grained components. I was challenged several times with the problem of transforming Excel spreadsheets into UML diagrams. But why not reuse this information directly? In one project I simply read the Excel model using JXL (Apache POI's HSSF, the "Horrible SpreadSheet Format", would also have worked) and generated source-code skeletons with Velocity on the fly. Reading Excel with JXL is actually very easy:

   Workbook workbook = Workbook.getWorkbook(new File(this.fileName));
   Sheet[] sheets = workbook.getSheets();
   Sheet model = null;
   for (Sheet sheet : sheets) {
       model = sheet; // here simply: the last sheet contains the model
   }
   Cell cell = null;
   for (int row = START_ROW; row < model.getRows(); row++) {
       for (int column = 0; column < model.getColumns(); column++) {
           cell = model.getCell(column, row);
           // process the cell content, e.g. put it into the template context
       }
   }
In the long term, Velocity templates can become unmaintainable and expensive to extend. For more complex projects I used Velocity as a bridge to generate platform-independent models (PIM) first. After this step the PIM was refined and transformed into source code. For this purpose I'd suggest the great openArchitectureWare (oAW) tool from the Eclipse GMT subproject. It allows the definition of metamodels and comes with a simple but powerful template language called Xpand. There is also an Eclipse plugin available for developing and maintaining the templates (it comes with syntax highlighting and auto-completion).

Using Excel as an input for generating the structure of an architecture is not such a bad idea. Now the analysts' documents are almost executable :-).

I'm still looking for a way to transform the architect's polished PowerPoint files into code, but it is very hard to interpret the animations :-)

Actually, I submitted a session with the same title to a German conference - but it was rejected :-)

BPEL-J, SOA, EoD and maintenance - (as maintainable, as JSPs)

The driving force behind the SOA hype is the promise of increased maintainability and ease of development. The dream, combining independent services into "composite" applications in an easy and fast way, is in practice not always possible. In the real world the services are only rarely compatible; often not even the parameters are. In practice the parameters may have the same name, but totally different semantics. To make such services compatible (during the "orchestration"), you need a kind of mapping, which is hard to establish in the XML world. One idea to solve the problem comes with the WS-BPELJ specification. This specification allows the coordination of different services (the definition of the flow) and the matching of parameters. It can be compared with a configurable state machine or controller (or facade). The funny story here: for more complex semantic translations, Java code can be used.

An example from the (WS-BPELJ) spec:

<bpelj:snippet name="Calculate Total">
    float subtotal = response.getSubtotal();
    subtotal = subtotal * (1 - discount.getRate());
    float taxes = subtotal * taxRate;
    float total = subtotal + taxes;
    // Prepare the text message to be sent in the next activity.
    jmsMessage = p_inquiryTopic.getSession().createTextMessage(response);
</bpelj:snippet>

The mixture of Java code and XML reminds me of the old JSP days. JSPs often became unmaintainable, so taglibs and structured frameworks like Struts were introduced. The main idea was simple: the separation of presentation and business logic.

I'm only curious whether the mixture of XML and Java is more maintainable than that of HTML and Java :-).

Sometimes (Java) Fat Clients are great, but you are not allowed to use it (W3HI)

Having everything (presentation, business logic + persistence) in one address space or JVM can dramatically simplify development and maintenance. The problem: you have to call such an architecture a "Fat Client", which can lead to long meetings, inefficient discussions and sometimes even the cancellation of the project. Using the name "Smart Client" is also critical: that name is already overused in the AJAX space, so we need something new and cool, but without the "Fat" part.
What about the name W3HI (Web 3.0 Highly Interactive Client)?
Sometimes you should use another name for an old technology - and all problems are gone :-)
