I’ve used Hibernate in the past with EJB3 annotations. Frequently I needed to map query results that didn’t correspond to Entity objects into value objects that could then be passed to the business or view tier. This is fairly trivial using the Hibernate Query APIs:


 public List<CustomerCityInfo> getCustomerCityInfo()
 {
     Query query = this.sessionFactory.getCurrentSession().createQuery("SELECT c.name as name, c.address.city as city from Customer c");
     query.setResultTransformer(Transformers.aliasToBean(CustomerCityInfo.class));
     return query.list();
 }
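One gotcha worth noting: aliasToBean matches the query’s select aliases against the bean’s property names, so the select clause should alias each column (e.g. c.name as name), and the bean needs a no-arg constructor and setters. The CustomerCityInfo value object is just a simple bean along these lines (a sketch; the field types are assumed to be Strings):

 public class CustomerCityInfo
 {
     private String name;
     private String city;

     // aliasToBean needs a default constructor and setters matching the query aliases
     public CustomerCityInfo() {}

     public void setName(String name) { this.name = name; }

     public void setCity(String city) { this.city = city; }
 }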

I ran into this need again when I started using JPA with the Hibernate JPA provider. I didn’t want to write non-portable code, so I wanted to avoid accessing the underlying Hibernate Session object.

My first naive attempt looked something like this:


 public List<CustomerCityInfo> getCustomerCityInfo()
 {
     Query query = entityManager.createQuery("SELECT c.name, c.address.city from Customer c");
     List<Object[]> resultList = query.getResultList();
     List<CustomerCityInfo> customerList = new ArrayList<CustomerCityInfo>();
     for (Object[] obj : resultList)
     {
         CustomerCityInfo customer = new CustomerCityInfo();
         customer.setName((String) obj[0]);
         customer.setCity((String) obj[1]);
         customerList.add(customer);
     }

     return customerList;
 }

Yuck!

I had some spare cycles recently, so I did some Google searching and discovered the “SELECT NEW” syntax that JPQL supports. It allows you to do something like this instead of the monstrosity above:


 public List<CustomerCityInfo> getCustomerCityInfo()
 {
     Query query = entityManager.createQuery("SELECT NEW com.mycompany.model.CustomerCityInfo(c.name, c.address.city) from Customer c");
     return query.getResultList();
 }

Much cleaner syntax. Much easier to maintain.
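One thing to keep in mind: “SELECT NEW” requires that the target class have a constructor whose parameters match the selected values, in order, and the class must be referenced by its fully qualified name in the query. For the query above that means adding something like this to CustomerCityInfo (again assuming both values are Strings):

 public CustomerCityInfo(String name, String city)
 {
     this.name = name;
     this.city = city;
 }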

Cheers!

By and large, the applications that we deploy are configured at runtime using the Spring PropertyPlaceholderConfigurer. For those of you not familiar with this concept: any environment-specific properties in our application, or even properties that we just want to be able to change easily, are defined using Ant-style placeholder syntax within our Spring beans configuration.

For instance, our DataSource configuration looks something like this:

<bean id="dataSource" class="com.atomikos.jdbc.nonxa.AtomikosNonXADataSourceBean"
          destroy-method="close">
      <property name="uniqueResourceName" value="My Primary Datasource"/>
      <property name="driverClassName" value="com.inet.tds.TdsDriver"/>
      <property name="url" value="jdbc:inetdae7a:${database.host}:${database.port}?database=${database.name}&amp;secureLevel=0"/>
      <property name="user" value="${database.user}"/>
      <property name="password" value="${database.password}"/>
      <property name="testQuery" value="select 1"/>
      <property name="maxPoolSize" value="50"/>
      <property name="minPoolSize" value="20"/>
      <!-- Setting reapTimeout=0 because Atomikos incorrectly reaps non-XA connections. -->
      <property name="reapTimeout" value="0"/>
    </bean>

By defining a PropertyPlaceholderConfigurer, we can have Spring inject the appropriate properties for this DataSource when the application starts up:

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>classpath:runtime-properties/${env}/my-application.properties</value>
        </property>
        <property name="nullValue">
            <value>null</value>
        </property>
        <!-- Force system properties to override any deployed runtime properties -->
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
    </bean>

For instance, my file runtime-properties/dev/my-application.properties would contain:

database.host=localhost
database.user=myuser
database.password=mypassword
database.name=myapplicationdb
database.port=1433

When we deploy our application, we deploy ALL of the runtime property files for all of the different runtime environments. By injecting the env property into the container at runtime, we tell the application which runtime properties to use, e.g.:

-Denv=dev

This is usually done either in the startup script or the startup configuration for the container. For example, in JBoss, we would add this to the run.conf file or the run.sh script.
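For example, in JBoss this typically means appending it to JAVA_OPTS in run.conf (the exact file and variable depend on your container and version):

JAVA_OPTS="$JAVA_OPTS -Denv=dev"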

This approach opens up some security issues, however. All of my database credentials for all of my runtime environments are now packaged up in cleartext in a single WAR artifact. If, for instance, the sys admin who manages our deployments in the testing environments (integration and staging) is not the same sys admin who manages our deployments in production (perhaps only trusted senior administrators are allowed to manage production applications), we have a security hole: the credentials for every environment are sitting in the WAR.

Historically, there has been no easy way around this. Typically, one would encrypt the passwords in the application configuration and generate a secret key that could decrypt them, but if you put that secret key IN the application deployment, you STILL have a security hole: any sufficiently determined adversary can use the key to decrypt your encrypted passwords.

Recently, I stumbled across the Jasypt project. It provides simplified interfaces on top of the Java encryption APIs. The feature that most appealed to me, however, was its Spring integration, which dovetails nicely with my configuration.

Jasypt offers a special implementation of the PropertyPlaceholderConfigurer called the EncryptablePropertyPlaceholderConfigurer that allows you to encrypt some of the properties in your configuration files and have them decrypted on the fly while the application is starting. More importantly, it allows you to pass the encryption key, or ‘password’, into the application as a system property at runtime, decoupling knowledge of the key from the deployable artifact.

Let’s see how this would work.

First we would need to decide on an encryption algorithm and a password. Then we would use the command-line tools that ship with Jasypt to encrypt our database password from above:
./encrypt.sh input="mypassword" algorithm=PBEWithMD5AndTripleDES password=jasypt_is_cool
Jasypt will output something that looks like:

----ENVIRONMENT-----------------

Runtime: Sun Microsystems Inc. Java HotSpot(TM) Server VM 10.0-b23

----ARGUMENTS-------------------

algorithm: PBEWithMD5AndTripleDES
input: mypassword
password: jasypt_is_cool

----OUTPUT----------------------

AUP/WSfdvbAfVuBJW/kbXsqh2qu8yWiJ

Then, we update our configuration file and replace:

database.password=mypassword

with:

database.password=ENC(AUP/WSfdvbAfVuBJW/kbXsqh2qu8yWiJ)

The syntax ENC(...) tells the EncryptablePropertyPlaceholderConfigurer that this property is encrypted.

Now, we will add the beans for the encryptor and its configuration:

    <bean id="jasyptConfiguration"
          class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig"
          p:algorithm="PBEWithMD5AndTripleDES"
          p:passwordSysPropertyName="jasypt.encryption.password"/>

    <bean id="propertyPasswordEncryptor"
          class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor"
          p:config-ref="jasyptConfiguration"/>

The important piece above is the configuration bean: the passwordSysPropertyName property tells Jasypt to load the encryption password from a system property named jasypt.encryption.password.
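The Spring wiring above is roughly equivalent to configuring the encryptor by hand, which is also a handy way to sanity-check an encrypted value outside of the application; a standalone sketch:

// Sketch: decrypt a value the same way the Spring-configured encryptor will.
// Assumes -Djasypt.encryption.password=jasypt_is_cool was passed to the JVM.
StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
encryptor.setAlgorithm("PBEWithMD5AndTripleDES");
encryptor.setPassword(System.getProperty("jasypt.encryption.password"));
String plaintext = encryptor.decrypt("AUP/WSfdvbAfVuBJW/kbXsqh2qu8yWiJ"); // yields "mypassword"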

Now, as I described above, I add a runtime property to my container so that this password gets injected into the runtime environment as a system property:

-Djasypt.encryption.password=jasypt_is_cool

The final step is to change the PropertyPlaceholderConfigurer and wire it up to this encryptor:

    <bean class="org.jasypt.spring.properties.EncryptablePropertyPlaceholderConfigurer">
        <constructor-arg>
          <ref bean="propertyPasswordEncryptor"/>
        </constructor-arg>
        <property name="locations">
          <value>classpath:runtime-properties/${env}/my-application.properties</value>
        </property>
        <property name="nullValue">
          <value>null</value>
        </property>
        <!-- Force system properties to override any deployed runtime properties -->
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
    </bean>

And that’s it! I’ve now added another layer of security to avoid having my production passwords floating around in my deployable artifact. It is a good idea to record this configuration in a secure, access-limited location outside of source control. At a minimum, that record should include the actual database password, the Jasypt encryption password, and the Jasypt algorithm used to generate the encrypted value.

Cheers!

Have you ever found yourself doing something like this in some code you’ve been writing?


String xml = "<adocument><anelement>" + someproperty + "</anelement><anotherelement>" 
                 + anotherproperty + "</anotherelement></adocument>";

Yuck!

I’m a lazy developer. And I know from experience that at some point, I’m going to have to change this embedded xml generating pile of crap. Someone will ask me to add an element, or remove an element, or add a namespace, or change the formatting, or …… Ugh! Maintaining this kind of code is what makes maintenance programmers go insane….

Fortunately, I’ve spent enough time doing maintenance development to know that a little time spent upfront will save a lot of time and frustration down the road, so I try to avoid naive solutions like the above in favor of something that appeals to my laziness.

One simple, yet effective, solution to the above problem is to use Velocity. Velocity is a powerful templating engine and makes short work of the simple example above. The main advantage is that the template is just text, and text is easy to maintain.

Here’s how we’d go about refactoring the above solution…

First, I’d create a velocity template adocument.vm with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<adocument>
  <anelement>${someproperty}</anelement>
  <anotherelement>${anotherproperty}</anotherelement>
</adocument>

Then, I’d configure my velocity engine in my spring context:

    <bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
      <property name="velocityProperties">
       <value>
        resource.loader=class
        class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
       </value>
      </property>
    </bean>

The properties tell Velocity how to find the template files when I give it the template file path. I’ve specified that I want Velocity to use the ClasspathResourceLoader to find my templates, which are likely packaged up inside my application archive (war, jar).

Now, I just wire up the class that is generating the xml to the engine:

private VelocityEngine velocityEngine;

public void setVelocityEngine(VelocityEngine velocityEngine)
{
    this.velocityEngine = velocityEngine;
}

with the corresponding bean property in the Spring configuration:

<property name="velocityEngine" ref="velocityEngine"/>

And tell it where the template is:

private String templateFile;

public void setTemplateFile(String file)
{
    this.templateFile = file;
}

with the matching bean property:

<property name="templateFile" value="myTemplatesDirectory/adocument.vm"/>

myTemplatesDirectory should be a directory in your classpath (packaged in your jar?) that contains your template.

Then, I change how I’m generating the xml:

Map<String, Object> model = new HashMap<String, Object>();
model.put("someproperty", someproperty);
model.put("anotherproperty", anotherproperty);
String xml = VelocityEngineUtils.mergeTemplateIntoString(velocityEngine, templateFile, model);

Note that this solution still violates the open/closed principle, but that can be solved by making the generation code a reusable module and having the client pass in the variables:

public String generateContent(Map<String, Object> model, String template)
{
    return VelocityEngineUtils.mergeTemplateIntoString(velocityEngine, template, model);
}

You may still need to update how the model is generated if the view needs to change (e.g. to add a new field), but you won’t have to muck around with any of the software to make simple textual changes to the XML. That’s a solution that appeals to my laziness.
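Putting the reusable version together, the generator is little more than this (a sketch; ContentGenerator is just an illustrative name):

public class ContentGenerator
{
    private VelocityEngine velocityEngine;

    public void setVelocityEngine(VelocityEngine velocityEngine)
    {
        this.velocityEngine = velocityEngine;
    }

    // Merges the named classpath template with the supplied model and returns the result.
    public String generateContent(Map<String, Object> model, String template)
    {
        return VelocityEngineUtils.mergeTemplateIntoString(velocityEngine, template, model);
    }
}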

It should be obvious to some of you that this refactoring is essentially a simple MVC solution. Velocity can be, and frequently is, used in place of technologies like JSP to generate views. However, as shown above, it is also a powerful tool in your toolbox for generating content in your middleware that could, for instance, become the payload of a JMS message.
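For example, with the plain JMS API the merged output can simply become the body of a TextMessage (a sketch; the session, producer and model are assumed to already exist):

// Sketch: use the rendered template as the payload of a JMS message.
String xml = VelocityEngineUtils.mergeTemplateIntoString(velocityEngine, "myTemplatesDirectory/adocument.vm", model);
TextMessage payload = session.createTextMessage(xml);
producer.send(payload);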

Cheers!

We have quite a few applications that use what amounts to a hand-rolled implementation of an ESB.

Typically, the system design includes several JMS queues which dispatch to a command chain for processing. The usage of the chain of responsibility pattern allows us to loosely couple components and processing logic.

In its simplest form, the system looks something like this:

Messages are received on the gateway and put on the associated JMS queue. The gateway can support a variety of pluggable protocols for inbound and outbound messaging (e.g. HTTP-REST, HTTP-WS, FTP, and even custom socket/UDP-based protocols). These gateway plugins are comparable to JBI Binding Components. Messages received by the inbound queue listener are dispatched to a processing chain which typically performs a variety of processing steps (usually including persisting some data to the database). At the end of the processing chain, a message is usually put on the reciprocal JMS queue, constituting the reply to the original message.
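Conceptually, the processing chain boils down to something like this (an illustrative sketch, not our actual classes; MessageContext is a hypothetical holder for the message identifier and any intermediate state):

// Illustrative chain-of-responsibility sketch.
public interface Command
{
    // Returns true if processing should continue to the next command in the chain.
    boolean execute(MessageContext context) throws Exception;
}

public class ProcessingChain
{
    private List<Command> commands;

    public void setCommands(List<Command> commands)
    {
        this.commands = commands;
    }

    public void process(MessageContext context) throws Exception
    {
        for (Command command : commands)
        {
            if (!command.execute(context))
            {
                break; // a command may short-circuit the rest of the chain
            }
        }
    }
}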

The major difference between our application design and a canonical ESB is that our messages are persisted in the database while they are in flight and a unique identifier (e.g. PK) for the message stored in the database is passed through the system. An ESB, on the other hand, uses message passing semantics where the ENTIRE message is passed along.

Why is this distinction important?

Because the entire message is continually passed along between components in an ESB (via the Normalized Message Router), the receiver (e.g. JBI Service Engine) always has an up to date ‘view’ of the message. This is not always the case in our system and that can cause problems if we aren’t prepared for it.

We use an XA transaction to manage the JMS receive, any affiliated database updates, and the terminating send of the message at the end of the processing chain. It’s easy, however, to confuse WHAT the XA semantics guarantee with HOW they operate in practice.

Consider the case in which we store a message in the database and send the PK for the newly inserted message to another JMS queue for subsequent processing. When the transaction commits, the transaction manager will eventually invoke commit() on each participant in turn. What is important to remember, however, is that the specification has no guarantees about WHEN the participants will be committed, only that when the transaction is complete, the state will eventually be consistent to an outside observer.
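In code, the producing side of that scenario looks roughly like this (a sketch; the DAO, JmsTemplate and queue name are illustrative):

// Sketch: persist the message and send only its primary key, both inside one XA transaction.
@Transactional
public void accept(InboundMessage message)
{
    Long pk = messageDao.save(message);                 // enlists the database resource
    jmsTemplate.convertAndSend("processing.queue", pk); // enlists the JMS resource
}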

What if, immediately after the transaction manager invokes commit() on the JMS resource, the thread of execution is suspended? The receiver listening on that queue may very well receive the message and start processing it before the transaction manager has even invoked commit() on the database resource. Under the typical Read Committed isolation level, the record will not yet be visible to the JMS receiver because it has not been committed by either the transaction manager or the underlying database resource. Assuming that the absence of the record identified by the PK in the message is an unrecoverable error would be a programming error. The system should expect this to occur and allow the message delivery to be rolled back and retried later, once the database resource has caught up with the rest of the system.
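On the receiving side, that means treating a missing record as a transient condition and forcing a rollback so the message is redelivered; roughly (class and exception names are illustrative):

// Sketch: if the row isn't visible yet, throw so the XA transaction rolls back
// and the JMS message is redelivered later.
@Transactional
public void handle(Long pk)
{
    InboundMessage message = messageDao.find(pk);
    if (message == null)
    {
        // Not necessarily fatal: the producer's database commit may simply not be visible yet.
        // An unchecked exception here triggers rollback and redelivery.
        throw new RecordNotVisibleException(pk);
    }
    // ... normal processing ...
}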

It is easy to be lulled into complacency in this scenario, because IN PRACTICE, you will rarely ever see the dreaded “record not found” error. This is because, in most situations, the commit happens so quickly on all the resources that the database IS in sync before the JMS listener receives the message. Furthermore, if you are using a non-XA database resource and relying on your transaction manager to emulate XA by using some variant of the Last Resource Gambit, this will almost always mean that the transaction manager commits the database resource BEFORE the JMS resource. However, relying on this to be the case is dangerous because we’ve come across at least one transaction manager (Atomikos) for which this didn’t hold true.

In summary, remember that XA transactions guarantee WHAT the state of the system will be but make no guarantees about WHEN that will occur. Failure to observe this distinction may cause you sleepless nights when your software is in production.