Posted by: Brian de Alwis | May 12, 2011

Being a good neighbour by curing strange Google popups

[I haven’t come across any other reports of similar problems, so I thought I’d write this up quickly for anybody else encountering this same issue.]

I normally do my utmost to avoid doing computer support, but my neighbours had been suffering from ad popups for several months and were at their wits’ end. As the issues seemed to have disappeared after a laptop was re-installed, I figured it was malware and recommended they do some scans and repairs. Since they hadn’t mentioned it since, I figured the problem was solved. But they were instead suffering in silence. They had tried everything — even contacting their ISP’s tech support.

The symptoms were that clicking on a Google search result caused the link target to pop up in a new tab. I find this kind of behaviour pretty handy (it’s generally how I do search), and I was prepared to write it off as some browser-bar feature from their provider, Rogers. But my warning bells were triggered when clicking on a Google search result would occasionally spawn a popup ad (many of which were broken) or redirect to another site entirely. Hooking up tcpdump showed that connections were being made to a number of unfamiliar machines.

After some digging, I finally discovered that their router’s DNS settings had been changed to point at some Russian-based redirection servers. These servers redirect Google and other sites through a server that rewrites links to be opened in new tabs. If the miscreants had been more subtle with their ad popups, my neighbours would likely never have complained. Imagine the data that these miscreants have been harvesting!

Unfortunately, in this case the breach was introduced by the neighbours themselves. They had seen a popup claiming that Mozilla Firefox had detected unusual network activity, asking them to change some settings to check whether a computer was malfunctioning. This warning appeared right around the time they started to notice these issues.

If you’ve stumbled upon this page because you’re having similar symptoms, check that your DNS settings are valid. If you’re uncertain, do a reverse-lookup of the DNS addresses and ensure that they match an organization you trust (e.g., your ISP). If you’re at all worried, then I’d highly recommend using Google’s Public DNS servers. Then do a round of malware and virus checking, and I’d also suggest changing your banking passwords.
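As a quick sketch of how to do that reverse-lookup (using Google’s well-known public resolver address 8.8.8.8 purely as an example address to check):

```
# Reverse-lookup a DNS server's address to see who operates it:
$ host 8.8.8.8
# or, equivalently:
$ dig -x 8.8.8.8 +short
```

If the name that comes back doesn’t belong to your ISP or another organization you recognize, treat the setting as suspect.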

[How are ordinary people supposed to figure this out?]

Posted by: Brian de Alwis | March 28, 2011

Bundle-NativeCode: Using platform-specific DLLs from OSGi

I’m currently doing some development on a large RCP application. One recent task required using a Windows DLL to open an email message through MAPI, which is made pretty easy with OSGi. But I encountered a few issues that I’m sure will bite other people.


OSGi now provides some support for resolving and loading shared libraries/DLLs in Java applications. A bundle declaratively specifies its DLLs and associated platform requirements in its bundle manifest using the Bundle-NativeCode header. This header value is then used when loading a DLL through System.loadLibrary().

The Bundle-NativeCode header specifies a number of comma-delimited clauses. For example:

Bundle-NativeCode: lib/win32/x86/http.dll; lib/win/zlib.dll;
    processor=x86; osname=win32,
  lib/macosx/libhttp.dylib; lib/macosx/libzlib.jnilib;
    osname=macosx; processor=x86; processor=x86_64,
  *

A clause specifies one or more DLLs. A clause is matched to the current platform using a set of parameters (e.g., processor and osname, but also osversion, language, or a selection-filter). Parameters of different types are ANDed together; parameters of the same type are ORed together, which can be tricky. Only a single clause is matched for the current platform, so all libraries needed on a platform must be specified within the same clause.
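That AND/OR rule is easier to see in code. Here is a toy sketch of the matching logic (illustrative only, not Equinox’s actual resolver; the class and method names are mine):

```java
import java.util.List;
import java.util.Map;

public class NativeClauseMatcher {
    // A clause's parameters: attribute name -> accepted values.
    // Same-named parameters (e.g., two processor= entries) collect into
    // one list of alternatives. The clause matches when every attribute
    // (AND across names) accepts the platform's value (OR within a name).
    static boolean matches(Map<String, List<String>> clauseParams,
                           Map<String, String> platform) {
        for (Map.Entry<String, List<String>> entry : clauseParams.entrySet()) {
            String actual = platform.get(entry.getKey());
            if (actual == null || !entry.getValue().contains(actual)) {
                return false; // one attribute failed, so the whole clause fails
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Corresponds to: osname=macosx; processor=x86; processor=x86_64
        Map<String, List<String>> clause = Map.of(
            "osname", List.of("macosx"),
            "processor", List.of("x86", "x86_64"));
        System.out.println(matches(clause,
            Map.of("osname", "macosx", "processor", "x86_64"))); // true
        System.out.println(matches(clause,
            Map.of("osname", "win32", "processor", "x86")));     // false
    }
}
```

So a platform must satisfy every attribute named in the clause, but only one of the alternatives listed for any given attribute.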

The trailing “*” indicates that the DLLs are optional. It means that the bundle will still resolve even if there are no other matching clauses. If this optional specifier is used, it must come last.

The power of Bundle-NativeCode is twofold. First, a bundle won’t resolve if there is no matching clause for the current platform. If you don’t include the optional specifier (the trailing asterisk), your code is pretty much guaranteed to never throw an UnsatisfiedLinkError. Second, OSGi will resolve a library name (e.g., “http”) to the appropriate DLL for the platform, and will manage the DLL loading and unloading.


I’ve been bitten by a couple of “gotchas” with the Bundle-NativeCode header.

Gotcha #1: System.loadLibrary() must still be called

I incorrectly thought that OSGi would load the specified DLLs automatically. Unfortunately your Java code must still explicitly load the libraries using System.loadLibrary(). Happily you needn’t specify the path location or the platform-specific prefixes or extensions. So using the example above, the code could just call:

  System.loadLibrary("http");
  System.loadLibrary("zlib");

[In hindsight, this makes sense: you wouldn’t want DLLs loaded unnecessarily.]

Gotcha #2: Dependencies aren’t traced

OSGi doesn’t trace a DLL’s own dependencies on other DLLs through the bundle’s dependencies: if your library depends on a DLL provided by another bundle, that dependency isn’t resolved for you. This hasn’t bitten me personally, but I have come across a few reports online.

Gotcha #3: “Missing host Bundle-NativeCode_0.0.0”: not as hazardous as it may appear

The lonely asterisk (“*”) is not actually considered as a clause. It’s simply a configuration parameter that causes the bundle to continue to be resolved even if there are no matching clauses.

When there are no matching clauses, Equinox displays a message like:

!SUBENTRY 2 jmapi 2 0 2011-03-28 10:23:27.633
!MESSAGE Missing host Bundle-NativeCode_0.0.0.

This is an error message only if the asterisk has not been specified! If the asterisk has been specified, then it’s merely an informational message; the bundle may still be successfully resolved.

This differentiation has caused me grief on several occasions now as I assumed such a message was an error and the cause of subsequent errors.


The OSGi Core Specification describes the interpretation of the Bundle-NativeCode header. In OSGi 4.2, it’s specifically in §3.7 and §3.10.

There are a few blog posts describing how to use this facility. They were very helpful!

Posted by: Brian de Alwis | March 28, 2011

JMAPI: Compose email messages with attachments from Java

I recently created a small bundle, JMAPI, to provide support for programmatically opening an email message on Windows platforms using MAPI. Although the bundle’s functionality is restricted to Windows, the bundle is cross-platform. The API is through a single class, jmapi.JMAPI, which supports three methods, one of which queries whether MAPI support is available.

JMAPI was carved from the ruins of the old Java Desktop Interface Components (JDIC) project. With most of JDIC’s functionality absorbed into Java 6’s java.awt.Desktop class, JDIC seems to have died. Not to mention that its binary distributions were wiped out with Oracle’s transfer of the project to Kenai.

The one useful component from JDIC that wasn’t absorbed was its attempt to provide cross-platform support for opening mail messages in the user’s mail client. Java 6 only supports the mailto: URL scheme. Although mailto: is useful, it is length-limited and, more importantly, doesn’t provide for specifying message attachments; some clients provide for an “attachment” field, but its support is uneven.

Unfortunately JDIC’s binary distribution is not OSGi-friendly. As my immediate needs were for a way to programmatically open a message using MAPI, I simply ripped out the support from JDIC, and did the minimum to make it work as an OSGi bundle. I didn’t have the time to figure out how to rebuild the JNI support libraries (it has references to MFC and ATL), so I simply copied in the Windows DLL from JDIC 0.9.5; unfortunately this is 32-bit only. It may be possible to build using VisualStudio Express 2010 by someone with more Microsoft mojo than me.

Be aware that non-Outlook clients, such as Thunderbird, may need to perform a few extra steps to ensure that they are properly registered as the default email client.

Source is available on GitHub and is licensed under the same terms as JDIC, the LGPL.

Posted by: Brian de Alwis | March 18, 2011

Building an Eclipse product with Maven and Tycho

This article is part of a small series describing the use of several new open source components in my company’s new product, Kizby.

Kizby is built using Tycho. There has been quite a bit of buzz about Tycho recently, with several recent blog posts providing worked (and hopefully maintained) examples of using Tycho. And it’s well deserved: our experiences with Tycho have been very positive: the community is responsive, Tycho’s functionality is very capable, and (more importantly) diagnosing build issues is straightforward.

But getting started was a bit bewildering due to having to learn Maven coupled with the vagaries of building Eclipse-based products. And from the questions from newcomers to the Tycho mailing lists, I can see that I was not the only bewildered newcomer. Hence this post.

[For those at EclipseCon, I recommend you attend Monday’s tutorial on building with Tycho.]


When we began work in earnest on Kizby, we quickly reached the point where we needed to have a build and deployment process. As PDE/Build, the granddaddy of the various Eclipse build systems, is highlighted in many Eclipse reference books, it seemed a logical choice. The experience of coaxing forth a build from PDE/Build is perhaps the closest that I will ever get to understanding the pain and joy of giving birth.

But after a month or so, we hit a wall with PDE/Build: we wanted to build a set of four related products, whereas PDE/Build was designed to produce a single product. Although I could have jerry-rigged something with make(1) and PDE/Build to repeatedly invoke the build for each of the products, I knew there must be a better way.

And so I surveyed the field and discovered a bewildering set of technologies bearing bewildering acronyms or names. Even restricting consideration to just those tools for building Eclipse-based products still left a headache. p2, I learned, is not a build system. Athena Dash is a set of scripts built around PDE/Build, but is no longer a recommended approach. Buckminster has a whack of documentation but seems oriented towards build-from-the-IDE, whereas I wanted a headless build technology to future-proof myself for continuous integration [aside: Buckminster, I discovered later, can be used headless]. b3 is under development, and currently seems more geared towards manipulating p2 repositories. And there was a (then) newer contender, the Maven-based Tycho.

I had actually come across Tycho before I first embarked on PDE/Build. But my few exposures to Maven (shortly after the new millennium) had left me scarred. Maven projects had strange directory schemes, the seemingly essential pom.xml had little content, and the documentation was opaque (I think you had to understand Maven to understand the documentation). For a tool that was positioned as a do-everything solution that would build, test, assemble, and deploy a system, I couldn’t even figure out how to get started. After some desperate web searches I figured out the minimum to make a project usable within Eclipse (with the magical mvn eclipse:eclipse), and then I ran away.

But I was now smarter — and desperate. And fortunately Maven has matured, and there are now some great tutorials and free e-books available. Building using Tycho turned out to be a bit simpler than I expected. And Maven actually does build, test, assemble, and deploy too.

Maven and Tycho

Tycho is actually a set of extensions (unfortunately called “plugins” too) for Maven to build Eclipse/OSGi-based components. Key to using Tycho is to understand Maven. And hence the reason for this blog entry: the existing Tycho docs (which are admittedly a bit minimal) assume prior knowledge of Maven, of which I had little.

A Little Bit About Maven

Maven is often described as being opinionated. You provide a declarative specification of what you want built, and Maven decides how it will happen. For those used to procedural build systems, it requires a bit of an adjustment.

The declarative specification is placed in a file named pom.xml, called the Project Object Model or POM. In Maven, a directory with a pom.xml is called a project. The pom.xml describes the purpose of the project, called the packaging type (e.g., to produce a jar file, or a war file, or a bundle), and includes other information such as the project’s dependencies, etc.

What’s nifty is that a project inherits information, both from its parent project (usually in the parent directory, in which case this project is a subproject), and from Maven’s own super POM. So provided your project conforms to Maven’s opinionated directory layouts, you rarely have to actually write anything beyond some minimal XML boilerplate: the pom.xml only serves to inform Maven of changes from the defaults.

For a project with a jar packaging, Maven automatically knows how to (1) generate any necessary resources, (2) compile the Java files in src/main/java, (3) test using the unit tests in src/test/java, (4) jar up the class files, etc. Maven only needs to know a few details such as the project’s name and any dependencies. And hence my confusion from years ago upon seeing a seemingly near-empty pom.xml file.
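As a sketch, the conventional layout for such a jar project looks something like the following (the project name is illustrative):

```
my-library/
  pom.xml               the project's POM
  src/main/java/        production sources, compiled into the jar
  src/main/resources/   non-Java files to include in the jar
  src/test/java/        unit tests, run during the build
  target/               generated build output
```

Because the layout is a convention, none of these paths need to be declared in the pom.xml.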

More on Projects

With Maven, each project builds some artifact. A project may have many subprojects, typically organized as subdirectories, each of which contributes to the creation of the project’s artifact. The type of the artifact is described by the POM’s packaging directive (e.g., a Java project is generally a jar or war; an Eclipse plugin with Tycho is eclipse-plugin).

An artifact is identified by a groupId:artifactId:version tuple, called its coordinates, which are provided in the pom.xml. The artifactId typically corresponds to the directory name, and the groupId to the logical purpose. In building Kizby, most projects share a common groupId, and the artifactId is the bundle symbolic name. (Note: there are actually two other coordinates, the packaging and the classifier, but they don’t seem to be talked about much.)

Maven actually doesn’t do much on its own, but instead delegates to various plugins (similar in concept to, but different from, Eclipse plug-ins). These plugins actually do the stuff that you would stick in a make(1) rule or a sequence of Ant tasks. Maven uses the packaging type to determine a set of phases, called a lifecycle, for building and deploying a project of that type, and then calls out to various plugins as it progresses through the different phases. The lifecycle is similar to the typical all target found in most Makefiles:

all: clean build zip deploy

And in Maven/Tycho, a make all corresponds to:

$ mvn clean deploy

The compilation and zipping is done as part of the deploy.
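For reference, the main phases of Maven’s default lifecycle run in a fixed order (this is an abbreviated list; there are several intermediate phases):

```
validate -> compile -> test -> package -> verify -> install -> deploy
```

Invoking a phase runs every phase before it, which is why mvn deploy implies compiling, testing, and packaging first.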

For more information on Maven, I highly recommend you look at Sonatype’s free book, Maven By Example.

Back to Tycho

Tycho is actually a set of Maven plugins for compiling, resolving, and provisioning using OSGi. It provides a different set of packaging types for building bundles, features, and tests (e.g., eclipse-plugin, eclipse-feature, eclipse-test-plugin). Whereas the traditional Maven Java plugins pull configuration settings such as dependencies, compiler versions, etc. from the pom.xml, Tycho’s plugins use the information encoded in the OSGi/Eclipse manifests (e.g., META-INF/MANIFEST.MF); the traditional approach is often called pom-first vs Tycho’s manifest-first approach.

Tycho also provides a bridge between Maven coordinates and OSGi identifiers. Tycho usually assumes that the artifactId should be the bundle’s symbolic name. The groupId doesn’t really have an analogue in the OSGi world, so I use it as a logical grouping. The version should match the bundle or feature version, though Maven uses a -SNAPSHOT suffix instead of .qualifier.

So a complete bundle’s pom.xml looks like:
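A minimal sketch, assuming a parent project in the directory above (all coordinates here are illustrative, not Kizby’s actual ones):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example.kizby</groupId>
    <artifactId>com.example.kizby.parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <!-- the artifactId matches the bundle symbolic name -->
  <artifactId>com.example.kizby.core</artifactId>
  <packaging>eclipse-plugin</packaging>
</project>
```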

That’s it! Seriously! Remember that a project inherits from its parent, so this bundle will have the same groupId. Features are very similar. Of course you need to provide the parent’s definition, which has some other stuff to define the Tycho plugin versions to be used and repository definitions for resolving bundles (e.g., an Eclipse p2 repository). But that’s pretty straightforward and covered in other tutorials.

Go Forth and Build

At this point, you should hopefully have a very general overview of Maven and should be ready to read a bit from Maven by Example and work through Chris Aniszczyk’s example.

Tricks or Gotchas

There are many questions that come up on the Tycho mailing list, but most are PDE- or p2-related rather than Tycho-related. Unfortunately getting to the position where you know the difference is painful. Here are the things I’ve found out the hard way:

  • The OSGi resolver and p2 resolver are not the same.
    You may receive resolution errors in building that do not occur when running in Eclipse.
  • Startups are different.
    When run from the debugger, plugins are automatically started. They aren’t in a product.
  • Creating products from a .product definition.
    There are some bugs with the p2 publisher that prevent publishing products on different platforms. There are workarounds.
  • Deployment.
    I don’t yet use Maven to control deployment to servers, and instead use some shell scripts. But there are some nice examples out there though.

I’ve started documenting these issues on the PDE FAQ and the Tycho FAQ; feel free to add your own.


Posted by: Brian de Alwis | March 10, 2011

On the perils of creating plugins from jars

Eclipse JDT/PDE has a handy feature where you can make a plugin/bundle from an existing JAR (File -> New -> PDE -> Create plugin from Jar). This might seem a brilliant feature, but it has a downside: Eclipse JDT doesn’t check the dependencies of the resulting class files.

This bit me recently when making a bundle from the JUNG libraries, which failed at runtime with errors like:

BundleClassLoader[edu.uci.lcs.jung.algorithms_2.0.0].loadClass(edu.uci.ics.jung.graph.Hypergraph) failed.
java.lang.ClassNotFoundException: edu.uci.ics.jung.graph.Hypergraph

Fortunately Peter Kriens’s bnd does a great job (instructions), and there’s now some nice-looking tooling too.


Posted by: Brian de Alwis | March 8, 2011

Expert vs n00b: Effective Java searches in Eclipse

I reviewed a paper a while back that described how a set of Java developers used the search tools in Eclipse to explore a code base. Having performed a number of studies involving very smart developers, in addition to being a proficient Eclipse developer myself, I surmised that the developers were not very proficient users of Eclipse. What made them ineffective — and more importantly, how can a developer become more proficient?

In this study, the developers were asked to answer questions that required finding elements within the source code for a system that corresponded to domain concepts.

The biggest clue for me as to their inexperience was that all the developers used the File Search and Java Search dialogs exclusively. Some apparently used only the File Search! I was so surprised that I contacted the paper’s authors to confirm (i) that it was not an imposed restriction, and (ii) that their traces hadn’t lumped the use of the search dialogs with other search mechanisms (they were separate). Although these dialogs are useful for grep-style searches, such as finding a message property, they are nowhere near as quick to use as the shortcut-driven searches like Open Declaration (F3) and References > Workspace (Ctrl+Shift+G). And no expert developer would only use the File Search dialog!

Further confirmation came from a breakdown of what the developers were searching for. Many of the searches involved mapping domain concepts to classes and interfaces. No proficient developer would use the Java Search dialog to find a class; they would instead use the Open Type dialog. The Open Type dialog is reactive, providing immediate feedback; the Java Search would take at least ten times as long.

Understanding the expertise of participants is essential for drawing conclusions and making recommendations. In this case, I suggested that the authors make recommendations about improving instruction for new or inexperienced developers.

So what indicators correlate with expertise?

From my observations, one mark of expertise is inversely correlated with mouse use: the more mouse usage, the more of a n00b. Experts seek out the functions that can be performed at the speed of thought. Dialogs and mouse pointing are interruptions.

Another sign of expertise is knowing and tailoring advanced options. One of the first actions I take with a new Eclipse installation is to disable the “Show Approximate Matches” search option: when I do a references search, I want to see exactly the callers to that method. Why would I want thousands of potential matches cluttering the view? If I wanted inexactness, then I would use the Java Search dialog!

[In fact, I never use the JDT searches any more: I use my own tool, Ferret, that provides far more focussed queries. I’ve let it slide though, and its PDE searches are a bit broken in 3.7. I’ll try to fix those.]

Ducky Sherwood had some interesting findings showing that experienced developers exhibit more breadth-first-search behaviours when navigating code, versus the depth-first searches seen in novices.

What signs can you think of that distinguish novice from expert?

[Update: the paper in question is: J Starke, C Luce, J Sillito (2009). Searching and Skimming: An Exploratory Study. In Proc Intl Conf Software Maintenance (ICSM).]

Posted by: Brian de Alwis | February 14, 2011

ClassNotFoundException with OSGi/Eclipse bundles

Today my JUnit Plugin tests decided to go on strike and throw ClassNotFoundException rather than run my master test suite. Although I was fairly confident that the small changes I had made couldn’t have ruined anything, I preferred confirmation from JUnit’s green bar.

After an hour of cursing, enabling all manner of debugging and tracing options on org.eclipse.osgi, and finally stepping through the classloader (not fun at all), I discovered that the test bundle’s BundleData had an empty classpath. Bingo! Adding the following single line to the MANIFEST.MF solved the issue:

  Bundle-ClassPath: .

Although I understand the necessity of this line, I’m at a loss to explain why my tests have been running perfectly well for the last several months.

Posted by: Brian de Alwis | January 21, 2011

About Kizby

As some of you may be aware, my company (Manumitting Technologies) recently released our first product, Kizby. Kizby is a project and task planning tool that moves beyond todo lists and task managers to bring the power of large project planning tools to individuals. We offer a free no-risk 30-day trial. You can find out more at its website.

Kizby is built using a number of new open-source components like Eclipse e4 and Tycho, and I’ve become quite involved in some of them. Since the release, I’ve had a few queries about these components. I’ll address some of the questions in the next couple of articles.

Posted by: Brian de Alwis | December 13, 2010

Configuring Jetty for SSL

Having finally achieved victory in setting up SSL for a Jetty instance, I thought I’d share one gotcha that I saw go undocumented: if your certificate requires an intermediate certificate, then you need to add your certificates to your keystore as a chain — you can’t just import the certificates in your chain one at a time.

I’m using an SSL certificate from StartSSL. I have a Class 2 certificate, which is signed through an intermediate certificate for their Class 2 Server CA. I need to add both my signed certificate as well as a certificate for the Class 2 Server CA.

After lots of futzing about adding the certificates individually (and wondering if I’d unknowingly missed a certificate), I finally realized that the answer was under my nose, described in Jetty’s Step 3b in How to configure SSL:

$ cat my-signed-ssl-certificate.pem startcom-ca.pem > cert-chain.txt
$ openssl pkcs12 -export -inkey my-ssl-key.pem -in cert-chain.txt -out keystore.pkcs12
Enter pass phrase for my-ssl-key.pem:
Enter Export Password:
Verifying - Enter Export Password:
$ keytool -importkeystore -srckeystore keystore.pkcs12 -srcstoretype PKCS12 -destkeystore keystore
Enter destination keystore password:  
Re-enter new password: 
Enter source keystore password:  
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Note that the certificates and key were combined into a single entry. I previously had 3 entries.
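One way to double-check (assuming keytool is on your PATH) is to list the keystore and confirm there is now a single PrivateKeyEntry whose certificate chain has the expected length:

```
$ keytool -list -v -keystore keystore
```

The entry’s “Certificate chain length” should count both your signed certificate and the intermediate.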

If openssl pkcs12 complains that “No certificate matches private key”, ensure that each -----BEGIN XXX----- and -----END XXX----- marker in cert-chain.txt is on its own line.

And even on Java 6 I found I had to ensure the keystore password was the same as the key password.

SSLPoke is a useful Java tool for discovering SSL issues.

Posted by: Brian de Alwis | November 12, 2010

The world needs a set of standard website / software agreements

After wading through far too many terms of use, terms of sale, and privacy policies, I’ve come to the conclusion that the world needs a set of standardized service and software agreements.

Does anybody even read through these seemingly-endless pages of bafflegab? I try to skim them, but I often give up after 10 pages or so. And now that I’m trying to define my own terms, where I’ve tried to use clear English, it’s clear that we need to rethink the problem.

Something modelled on the Creative Commons, with its short-form codes like “cc-nc-by”, would be excellent. These codes provide a quick overview with a single glance.

Most companies’ agreements could be represented by something like: nw-nd-in for no-warranty-except-where-we-gotta-by-law, no-disclosure, and you-indemnify-us.
