
How to extend the remote monitoring facilities of GlassFish using MBean cascading

The problem

A common problem with large production application server clusters is being able to monitor the health of each instance in the cluster from a central point. A typical case might be monitoring the amount of JVM heap memory that is used, with a view to triggering an alert if the memory usage appears to be excessive. Alternatively, we might want to monitor the CPU or I/O load to ensure that load is being shared evenly between the instances. The JVM does expose this information via the JMX API, but that still leaves the problem of getting access to it remotely, particularly when we might not know which instances in the cluster will be up at any given time.

The solution to this problem is to monitor the instances in the cluster from the Domain Administration Server (DAS), and make use of the MBean cascading process that GlassFish implements. By means of cascading, the DAS will aggregate the MBeans from each instance of the cluster, and present them to any JMX client. The client needs to know something about the way GlassFish names the cascaded MBeans, but this is not particularly complex. Other than that, the client can be anything that speaks JMX — jconsole for example. Or you can implement a custom client, perhaps in a servlet which the administrator has access to. The MBean cascading process takes care of all the networking complexities, so the client need know nothing of the network topology.

This article describes step-by-step how to code, compile, deploy, and use a custom MBean to provide a view of JVM heap usage on each instance in a GlassFish cluster. There are other ways to obtain this information, I suspect, but the principles described here can be used for all sorts of control and monitoring.


In this article I assume that you have a working GlassFish installation with a working cluster or a set of stand-alone instances (it makes no difference for our purposes here whether the instances are technically in a cluster or stand-alone, so long as they are in the same administration domain). It does not matter whether the instances are on the same physical host or different hosts, so long as you deploy the MBean code (i.e., the .class files) in the relevant places on each physical host.

I also assume that you have a reasonable working knowledge of the JMX specification, although you don't need to know a whole lot about JMX to use the technique described here.

In this article I am using only simple command-line tools, on a Linux system. This is to make it absolutely clear what's going on at each stage. Of course, you can develop and package MBeans using whatever tools you use for other kinds of Java development.

How MBean cascading works

(This section is quite technical, and you may prefer to look at the code example that follows before tackling it.)

Like everything in the Java world, an MBean is nothing more than a class and an interface that conforms to some design pattern. MBeans are quite similar to Enterprise JavaBeans (EJBs), not only in the way they are implemented, but in the way they are accessed remotely via proxies, and managed inside a container. For our present purposes, the container is the GlassFish administration infrastructure, which builds on the JVM's default MBean server.

Each GlassFish instance creates and maintains a set of MBeans — those it uses itself, and any custom MBeans installed by the administrator. These MBeans together expose most of the management functionality of the application server instance.

Each MBean is identified within its own container by its object name. An object name is formed from a domain (a single text string) and a number of name-value pairs called key properties. This naming scheme makes it easy for clients to locate sets of MBeans according to their functionality. The JMX API provides methods for enumerating MBeans by domain, or by properties that match specific keys.
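This naming scheme can be seen directly in the JMX API. Here is a minimal sketch (the name 'HeapSize' and the instance name 'instance1' are just placeholders) showing how an object name breaks down into a domain and key properties, and how a pattern matches it:

```java
import javax.management.ObjectName;

public class ObjectNameDemo {
    public static void main(String[] args) throws Exception {
        // An object name: domain "user", key properties "type" and "server"
        ObjectName name = new ObjectName("user:type=HeapSize,server=instance1");
        System.out.println(name.getDomain());              // user
        System.out.println(name.getKeyProperty("server")); // instance1

        // A pattern that matches every MBean in the "user" domain,
        // whatever its key properties
        ObjectName pattern = new ObjectName("user:*");
        System.out.println(pattern.apply(name));           // true
    }
}
```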

In order to manage an instance through its MBeans, there has to be an infrastructure for remote operations on MBeans. In GlassFish, this infrastructure is provided by the Java Dynamic Management Kit (JavaDMK). The JavaDMK provides services to manage MBean instances and establish communication between them, among other things. In practice, the communication is based on Java RMI, just as it is with EJBs (although the standard JMX connector runs RMI over its native JRMP transport, rather than the IIOP transport that EJBs use). So once a management client has used the JMX API to locate a particular MBean, it can create a local proxy for it, and then interact with the proxy just as if it were an ordinary Java object. Anybody who has worked with EJBs will find this pattern familiar, although it's straightforward enough even without such experience.

Because the JavaDMK makes MBeans remotely accessible, it becomes possible to create a single management point, with that management point using the remote access facilities to delegate administration operations to the relevant instance. The management point is known as the master agent in JMX-speak, and the places to which administrative operations are delegated are the sub-agents. In principle, a sub-agent can be a master agent for another sub-agent, which is the origin of the term 'cascading'. In practice, such a use of JMX seems to be quite unusual. In GlassFish, the DAS acts as the master agent for a number of instances, which are the sub-agents. Consequently, an administrative client can administer any instance through its proxy on the DAS.

In the JavaDMK documentation, the process of making a sub-agent available to a master agent is called mounting. That is, the MBean in the sub-agent (called the source) is 'mounted' in the master agent, and becomes the target. The analogy with mounting a filesystem is significant here. When you mount a filesystem in a Unix system, you mount it at a particular point in the filesystem hierarchy. The location of a specific file is determined by a name which is an aggregate of the name it has on its own filesystem, and the name of the mount point. And so it is with MBean cascading: the name of a specific MBean on the master agent is an aggregate of the name used to register it in its own MBean server (i.e., its own instance of GlassFish) and the name used to mount it on the master agent. Unlike a filesystem mount, however, there are no specific rules about how the mount point for MBeans is named. The master agent is free, more or less, to create whatever name mappings it likes.

With GlassFish, it is important to understand that the master agent (DAS) mounts all custom MBeans with no name transformation. In filesystem terms, that's a bit like mounting everything at the root directory. Consequently, you can't have two custom MBeans in different instances with the same object name. Well, actually you can — the effect is exactly the same as mounting two filesystems on the same mount point: only the most recently mounted one becomes available, and the earlier one is obscured. This is almost certainly not what you want, but fortunately the administration tools take care of this potential naming conflict.

When you register an MBean to be used with GlassFish for remote administration, its name is transformed at registration time. The administration tools add to the object name an additional key property: server=[instance_name]. Because GlassFish requires each instance in a domain to have a unique name, this provides an unambiguous name for the MBean. So if you use the administration tools to install an MBean across a whole cluster, and choose to give it the name

user:type=HeapSize

it will actually get registered on each instance in the cluster with a different name:

user:type=HeapSize,server=instance1
user:type=HeapSize,server=instance2

So when the instance MBeans are mounted on the DAS, they have names that are unique in the DAS, even though the DAS performs no name transformation of its own. A corollary of this name mangling scheme is that if you install an MBean in an instance programmatically, rather than administratively, you will have to take over the name mangling yourself, or risk having the MBeans obscuring each other on the DAS.
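If you do register MBeans programmatically, the mangling the administration tools perform can be reproduced by adding the server key property yourself. A sketch (the name 'HeapSize' and instance name 'instance1' are just placeholders):

```java
import java.util.Hashtable;
import javax.management.ObjectName;

public class NameMangleDemo {
    public static void main(String[] args) throws Exception {
        // The name the administrator chose for the MBean
        ObjectName base = new ObjectName("user:type=HeapSize");

        // Add the server=[instance_name] key property, as the GlassFish
        // administration tools do at registration time
        Hashtable<String, String> props =
            new Hashtable<String, String>(base.getKeyPropertyList());
        props.put("server", "instance1");
        ObjectName mangled = new ObjectName(base.getDomain(), props);

        // The canonical form sorts the keys alphabetically
        System.out.println(mangled.getCanonicalName());
        // -> user:server=instance1,type=HeapSize
    }
}
```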

Coding and compiling the MBean

Our MBean will expose exactly one property — PercentUsed, which is a measure of the percentage of the heap memory currently in use. Of course, in practice the MBean can be as simple or as complicated as required. Note that the MBean has access to the JVM's JMX infrastructure, and can freely use that to gather information about the JVM.

Following the JMX specification, the MBean is in two parts — an interface and an implementation. Here is the interface, which is defined in a file HeapSizeMBean.java:

package com.kevinboone.mbeans.heapsize;

public interface HeapSizeMBean
{
  public int getPercentUsed ();
}
And here is the implementation, in the file HeapSize.java:
package com.kevinboone.mbeans.heapsize;
import java.lang.management.*;

public class HeapSize implements HeapSizeMBean
{
  public HeapSize () {}

  public int getPercentUsed ()
  {
    MemoryMXBean m = ManagementFactory.getMemoryMXBean();
    MemoryUsage mu = m.getHeapMemoryUsage();
    float used = (float) mu.getUsed();
    float max = (float) mu.getMax();
    return (int) (used/max * 100.0);
  }
}
Note that GlassFish won't use the interface at all, only the implementation class. But if you don't provide the interface the deployment process will reject the MBean as not complying with the JMX specification.
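Before going through the GlassFish deployment machinery, it can be worth registering the MBean in a plain JVM to check that it behaves. Here is a sketch; the nested classes simply duplicate the ones above so that the file compiles on its own, and the object name is illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class HeapSizeLocalTest {
    // Local copies of the article's MBean interface and implementation,
    // nested here so the sketch is self-contained
    public interface HeapSizeMBean {
        int getPercentUsed();
    }

    public static class HeapSize implements HeapSizeMBean {
        public int getPercentUsed() {
            MemoryUsage mu =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            return (int) ((float) mu.getUsed() / (float) mu.getMax() * 100.0);
        }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean in the JVM's own platform MBean server --
        // no GlassFish involved, so this exercises the code in isolation
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("user:type=HeapSize");
        mbs.registerMBean(new HeapSize(), name);

        // Read the attribute back through the MBean server, as a
        // remote client eventually would
        Integer pct = (Integer) mbs.getAttribute(name, "PercentUsed");
        System.out.println("Heap used: " + pct + "%");
    }
}
```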

I have put these source files into a directory structure that matches the package name:

src/com/kevinboone/mbeans/heapsize/HeapSizeMBean.java
src/com/kevinboone/mbeans/heapsize/HeapSize.java

They can then be compiled like this:

$ javac -d target src/com/kevinboone/mbeans/heapsize/*.java

This will put the compiled classes into the correct tree structure under the target directory, so we end up with this:

target/com/kevinboone/mbeans/heapsize/HeapSizeMBean.class
target/com/kevinboone/mbeans/heapsize/HeapSize.class
I am labouring this point about the directory structure, which many developers will find completely obvious, because GlassFish is extremely fussy about how MBeans are presented to it, as I will explain.

Deploying the MBean

Deploying the MBean has two parts: making the compiled code available, and registering the MBean with GlassFish.

Making the code available

The official, documented place to install MBean code in the DAS is:

$DOMAIN/applications/mbeans

where $DOMAIN is the directory containing the domain-specific files. In most Unix installations this directory will be something like:

/opt/SUNWappserver/domains/domain1
That takes care of the DAS, but of course you'll still need to make the code available to individual instances. If these instances are on different physical hosts, then naturally you'll have to copy the files to that host.

The place to install instance-specific MBean classes is exactly analogous to that on the DAS:

$INSTANCE/applications/mbeans

Because instances are managed by a node agent, the location of the directory $INSTANCE will depend on the node agent. Conventionally, if there is only one node agent defined, it will take the same name as the host's primary hostname. So typically the $INSTANCE directory will be of the form:

/var/opt/SUNWappserver/nodeagents/[hostname]/[instance_name]
You'll need to copy the Java class files to the $INSTANCE directory of each instance that has to be monitored.

Now here's the slightly tricky bit, and why I'm demonstrating this process using command-line tools. At the time of writing, GlassFish does not support the deployment of MBean classes in JAR files. The full directory tree that matches the Java package name has to be built under each mbeans directory. This is trivially easy at the command line, and fiddly with most graphical development tools.

Having compiled the Java classes as described above, all I need to do to install the code in GlassFish is to copy the contents of the target directory, maintaining the structure. For example:

$ cp -pr target/* /opt/SUNWappserver/domains/domain1/applications/mbeans
$ cp -pr target/* /var/opt/SUNWappserver/nodeagents/hostname/instance1/applications/mbeans
$ cp -pr target/* /var/opt/SUNWappserver/nodeagents/hostname/instance2/applications/mbeans
Of course, some network copying will be involved in a real, distributed cluster. An alternative approach is to build the MBean classes into a JAR archive (all IDE tools can do this), copy the JAR archive to the appropriate place on each host, and then unpack it using the command line jar utility. However you install the classes, the crucial point is that the directory structure must be correct.

In principle (according to the GlassFish documentation) if you deploy the code this way, you should not need to restart anything for the classes to become available to the JVM. The class files will not be read into the JVM until you register the MBean. However, my experience is that you do need to restart instances (but not the DAS) for the MBean registration (see next section) to take effect.

Registering the MBean

Although you need to make the Java classes available to each instance, the registration process is done entirely from the DAS, using the asadmin utility. If you're using stand-alone instances controlled by the same DAS, you need to invoke asadmin once for each instance you want to install the MBean on. If you're using a proper cluster, you can just invoke it once for the whole cluster. If you want to install the MBean on the DAS itself, then a separate step is needed for this. In practice, you probably won't need to install on the DAS — it is not necessary to do this for the DAS to aggregate the MBeans on the instances.

$ asadmin create-mbean \
    --objectname "user:type=com.kevinboone.mbeans.heapsize.HeapSize" \
    --target $CLUSTER_NAME \
    com.kevinboone.mbeans.heapsize.HeapSize

$CLUSTER_NAME is the name of your GlassFish cluster, or a single stand-alone instance, or the literal text 'server' to deploy on the DAS.

The objectname is arbitrary, so long as it conforms to the JMX naming rules, and should be chosen to make it easy for your JMX clients to find. The part of the name before the first colon is called the domain in JMX-speak (nothing at all to do with application server domains). The part after the colon is called the key properties, and must be of the form name=value. The user domain is strongly recommended.

Note that the final argument to create-mbean is the name of the MBean implementation class, not the interface. As I pointed out before, GlassFish does not use the interface. JMX clients, however, might use it — see below for an example of this mode of operation.

Using the MBean from a JMX client

You should now be able to see the deployed MBeans by pointing a JMX client at the DAS. For an initial test, jconsole should be fine. By default, the JMX interface on the DAS listens on port 8686, and authenticates using the domain's ordinary admin user and password. In the screenshot of jconsole below, the MBean is deployed on two instances and the DAS. You can see the single property PercentUsed exposed.

In practice, in a production environment you'd either have a JMX monitoring framework in place already, or you'd implement some kind of custom monitoring application to check or display the values obtained from the MBeans. The simple application presented below shows the outline of such a custom application.
package com.kevinboone.mbeans.heapsize;
import javax.management.*;
import javax.management.remote.*;
import java.util.*;

public class HeapSizeClient
{
  public static void main (String[] args) throws Exception
  {
    // Create a URL for connecting to the DAS. The required URL can be
    //  found by looking at the DAS server.log file — it is output when
    //  the appserver starts. If you're monitoring remotely from the DAS,
    //  of course, you'll need to substitute the proper hostname

    JMXServiceURL url =
      new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");

    // Connect to the DAS JMX service using the admin user ID and
    // password

    Hashtable<String, Object> env = new Hashtable<String, Object>();
    String[]  credentials = new String[] {"admin", "pigsfly2"};
    env.put (JMXConnector.CREDENTIALS, credentials);
    JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
    MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

    // Get a list of MBeans from the DAS

    Set<ObjectInstance> instances = mbsc.queryMBeans(null, null);
    for (ObjectInstance instance : instances)
    {
      // Use the 'type' key property to determine if this MBean is one
      //  of ours. Note that, depending on the number of instances in the
      //  cluster that are up, there could be any number of matches

      String type = instance.getObjectName().getKeyProperty ("type");
      if ("com.kevinboone.mbeans.heapsize.HeapSize".equals(type))
      {
        // All MBeans in GlassFish will have a 'server' key property in
        //  their names. We use it here only for display

        String server = instance.getObjectName().getKeyProperty ("server");

        // Create a proxy MBean we can interrogate

        HeapSizeMBean proxy =
          JMX.newMBeanProxy(mbsc, instance.getObjectName(),
            HeapSizeMBean.class, true);

        // Call the relevant method on the proxy

        int percentUsed = proxy.getPercentUsed();

        // Print the result for this MBean

        System.out.println
          ("Instance: " + server + ", heap usage: " + percentUsed + "%");
      }
    }

    jmxc.close();
  }
}
Note that although GlassFish does not use the MBean interface, this client does. Creating an MBean proxy that implements the MBean interface saves a certain amount of reflective examination of the MBean and therefore simplifies coding. But it does mean that the interface and the class must both be present when you compile and run the client. On my setup, running the client produces the following result:
$ java -cp target/ com.kevinboone.mbeans.heapsize.HeapSizeClient
Instance: test4inst2, heap usage: 10%
Instance: test4inst, heap usage: 5%
Instance: server, heap usage: 23%
Because the cluster is idle, the DAS (instance name 'server') has the largest heap usage. Note that this code:
  String server = instance.getObjectName().getKeyProperty ("server");
retrieves the part of the MBean's object name that was added by the administration tools to disambiguate the MBean names from the various instances. It's not just necessary for disambiguation, but also genuinely helpful in this scenario, since it gives the JMX client a very easy way to find out which instance it's looking at.
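Rather than scanning every MBean with queryMBeans(null, null) and filtering by hand, as the client above does, the filter on the 'type' key can be pushed into the query itself using an ObjectName pattern. A sketch, run here against the JVM's own platform MBean server as a stand-in for the DAS connection:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class QueryDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Match only MBeans whose 'type' is our implementation class,
        // whatever the 'server' key property happens to be
        ObjectName pattern = new ObjectName(
            "user:type=com.kevinboone.mbeans.heapsize.HeapSize,*");
        Set<ObjectName> names = mbs.queryNames(pattern, null);
        for (ObjectName name : names)
            System.out.println(name.getKeyProperty("server"));
    }
}
```

On a plain JVM this prints nothing, of course; against a DAS it would return one name per mounted instance.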


If you know even a little about JMX, it's surprisingly easy to extend the control and monitoring facilities of the GlassFish application server, and you get all the remote management for free. But best of all, the approach described in this article does not rely on any nasty kludges — it is a fully documented and supported feature of the product.
Copyright © 1994-2013 Kevin Boone. Updated Feb 08 2013