Adaptive log level based on error context

The other day I was thinking about my company’s application logging system and wondered: why don’t we increase the log output automatically when there is a problem?

If a method in a class is logging at the ERROR level and an exception occurs, the log output is useful at that level; it contains a stack trace, for example. However, the levels below ERROR contain much more information that could be useful in maintaining the system. Levels are used to control the output of logging systems. For example, java.util.logging has seven default levels: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST.

One way of getting at this useful information is for the class that detected the exception to set the log level of the problem method or function to a more verbose level. The algorithm would probably be similar to the Circuit Breaker design pattern.

Like “First Failure Data Capture” this approach could be called Nth Failure Data Capture.

Of course, while this may be easy to do programmatically, in practice it is not a simple approach. Many questions remain: performance; resources; is one error occurrence enough to trigger a change; are all threads affected; which target level; how much logging; how to reset the level; and so forth.
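The escalation logic might be sketched as follows; a minimal example assuming java.util.logging and invented threshold values, which sidesteps most of the open questions above (per-thread behavior, handler levels, persistence):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: escalate a logger's verbosity after repeated failures,
// then restore the original level after a run of successes.
public class AdaptiveLevel {
    private static final Logger LOG = Logger.getLogger(AdaptiveLevel.class.getName());
    private static final int FAILURE_THRESHOLD = 3;  // assumed tuning values
    private static final int SUCCESS_RESET = 10;
    private final AtomicInteger failures = new AtomicInteger();
    private final AtomicInteger successes = new AtomicInteger();
    private final Level normalLevel = Level.WARNING;

    public void onFailure(Throwable t) {
        successes.set(0);
        if (failures.incrementAndGet() >= FAILURE_THRESHOLD) {
            LOG.setLevel(Level.FINEST);  // breaker trips: open the floodgates
        }
        LOG.log(Level.SEVERE, "operation failed", t);
    }

    public void onSuccess() {
        failures.set(0);
        if (successes.incrementAndGet() >= SUCCESS_RESET) {
            LOG.setLevel(normalLevel);   // breaker closes: back to normal verbosity
            successes.set(0);
        }
    }

    public Level currentLevel() { return LOG.getLevel(); }
}
```

Like a circuit breaker, repeated failures “trip” the logger into a verbose state and sustained success closes it again.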

Funny, I’m sure this is not a new idea. Yet a quick search makes it look like it is not a well-known approach.


  • Record everything instead of logging just some things. This is possible with some systems, for example Chronon?


Application logging using unique id

For programs that write to log files, a best practice is to include a tracking ID. What should this ID be, and how should it be used? The following is presented as a ‘design pattern’.


At the earliest inception of a ‘process’, create an ID which is a combination of an ‘origin id’ and a ‘unique id’. This non-business-related global inception ID (GIID) should be used for every log output to provide machine readability and tool use. When each new business-level or operational id is obtained or generated, it is logged to provide an ‘association’ with this GIID. This is similar to various existing practices, so it is presented as a Software Design Pattern.

Author: Josef Betancourt, Jan 11, 2015

CR Categories: H.3.4 [Systems and Software]; D1.3 [Concurrent Programming]; D.2.9 [Management]; K.6 [MANAGEMENT OF COMPUTING AND INFORMATION SYSTEMS]; K.6.3 [Software Management]

  1. Context
  2. Forces
  3. Solution
  4. Consequence
  5. Implementation
  6. Appendix
  7. Further reading


App Logs

An application log file records runtime programmatic events: details, exceptions, debug, and performance-related measures. This is different from specialized server log files, such as a webserver’s access logs, error.log, etc. The latter are more standardized, for example with the W3C Extended Log File Format, and well supported.

App logs usually capture details at specific logging levels and named contexts. In the Java ecosystem there are plenty of libraries to support this and now the JDK supports this as part of the java.util.logging package.

Despite the advances made in logging APIs and support within operating systems and frameworks, app logs are at a primitive level of software maturity. What A. Chuvakin and G. Peterson describe as the “… horrific world of application logging” is composed of ad hoc, minimally documented, unstructured, untested, and under-specified delivered components.

Attempts to create widely used industry standards have failed, and every business, software project, dev team, or industry reinvents solutions and attempts to tackle the same problems.


In the context of server app logs, multiple sessions will output log information that can be intermixed. These sessions can be part of an ongoing process, such as a user interaction with a web site.

External integration points (web services, database, etc) may also be invoked. Unless each session is identified in the log and integration logs, subsequent support, debug, and auditing are very difficult.

The problem is not just tracking where and when ‘integration’ occurred or its non-functional integration criteria (i.e., timing), but the tracking of subsequent logging, if any, at that location.

App logs are used extensively during development. Their importance is illustrated by an old meme: “debuggers are for wimps”. As such, logs with impeccable tracking used for design and test are a good Return On Investment (ROI).

The same is true for deployed systems. In many cases the only information available on historical behavior is in a log file.

This seems like a programming 101 lesson, but it is widely disregarded in practice. That log file output is a minor concern, and sometimes not even part of a “code review”, is puzzling.


1. A service must invoke a distributed call to another system. The service has retry logic and logs each failure. If each log output does not identify the session or operation, the retries could get mixed with other threads. Identifying an end user’s request is a hit-or-miss bracketing of time stamps if the development team did not include enough identifiable data in each log output.

2. A computer savvy end user or family may attempt to register into your system with multiple browsers simultaneously. This could cause problems if multiple logins are supported and an error occurs. How do you track this and debug it?

3. The app server makes a remote call to a service integration point and that service fails. How is the owner of that service informed as to the specific invocation? There are probably deployed systems where one would have to compare time stamps on log output to even determine where the two systems communicated, and even then it is vague. Some systems may not even log anything unless there is a fault of some kind.

4. You have to identify time periods based on hazy user complaints, search through multiple log files with regular expressions, then walk each output to recreate a specific error scenario. Isn’t this manual drudgery what computers were supposed to eliminate?


Global Inception ID

Logging with Unique identifiers is encouraged as a best practice:

“Unique identifiers such as transaction IDs and user IDs are tremendously helpful when debugging, and even more helpful when you are gathering analytics. Unique IDs can point you to the exact transaction. Without them, you might only have a time range to use. When possible, carry these IDs through multiple touch points and avoid changing the format of these IDs between modules. That way, you can track transactions through the system and follow them across machines, networks, and services.” —

This unique identifier is generalized so that on first entry into a system, or at the start of a process, a Global Inception ID (GIID) is assigned to distinguish that ongoing process from others. A more descriptive term would be Global Tracking ID, but that conjures up privacy and security concerns and is already used in commerce for a different purpose. Then again, ‘inception ID’ brings up visions of barcodes on people’s foreheads. Ok, how about “bludzwknxxkjysjkajskjjj”?

The term “Global” is to indicate that this ID is unique in a specific enterprise system. The uniqueness comes from its creation on a first contact basis on a specific subsystem. In essence this is a log tracking ID.

For example, a web server or an app server would be the first point of contact for a request from an external User. The GIID, consisting of a combination of an origin id and a unique id, would be created at this point:

GIID ::= originID uniqueID
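A minimal sketch of composing such an ID in Java, using the origin code PE65K that appears later in this post as an assumed, configured value:

```java
import java.util.UUID;

// Sketch of GIID ::= originID uniqueID. The origin code is assumed to be
// configured per subsystem; a UUID with hyphens stripped supplies uniqueness.
public class Giid {
    public static String next(String originId) {
        String uniqueId = UUID.randomUUID().toString().replace("-", "");
        return originId + uniqueId; // e.g. "PE65K" + 32 hex characters
    }
}
```

Usage: `Giid.next("PE65K")` yields something like `PE65K1f788da1ac4a43bb82adb8e61cfcb205`.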

In article “Write Logs for Machines, use JSON” Paul Querna uses the term “txnId” for this type of ID:

“… this is essentially a unique identifier that follows a request across all our of services. When a User hits our API we generate a new txnId and attach it to our request object. Any requests to a backend service also include the txnId. This means you can clearly see how a web request is tied to multiple backend service requests, or what frontend request caused a specific Cassandra query.”

Another term for this GIID, or ‘txnId’ is Correlation ID. This terminology is used in SharePoint.

The correlation ID is a GUID (globally unique identifier) that is automatically generated for every request that the SharePoint web server receives.

Basically, it is used to identify your particular request, and it persists throughout further communications, if any, between the other servers in the farm. Technically, this correlation ID is visible at every level in the farm, even at a SQL profiler level and possibly on a separate farm from which your SharePoint site consumes federated services. So for example, if your request needs to fetch some information from an application server (say, if you are using the web client to edit an Excel spreadsheet), then all the other operations that occur will be linked to your original request via this unique correlation ID, so you can trace it to see where the failure or error occurred, and get something more specific than “unknown error”. —


Contextualized IDs
Another related term is called “Contextualized IDs”, by Michael Nygard:
“… it’s important that IDs carry along their context. It isn’t enough to have an alphanumeric Policy ID field. You need a URN or URI to identify which policy system issued that policy number.”
— Michael Nygard, ‘Inverted Ownership’

Various ‘Structured Logging’ efforts and syslog implementations already contain a ‘sending’ field specification. The GIID incorporates the sender id as the Origin ID, and this combination is more amenable to parsing by humans and text tools.




A good candidate for a GIID must be large enough to satisfy uniqueness requirements. This could be, for example, a 36-character field. Where log files are manually inspected with a text editor, this lengthens each log line, which already contains many common fields such as a time stamp.


Unintentionally, “bad” logging practices make it harder to track and correlate personally identifiable information (PII). With the use of the trans-system GIID, correlation between various business-related identifiers is made easier.

The correlation ID is not necessarily a secret, but like other tracking objects, such as cookies, it can be used for information discovery or questionable information storage. But if an attacker can already access your log files, there are other, more serious issues.


What determines the first contact subsystem? A true distributed system could be configured, even dynamically, so that any system could be the first contact system. If so, then each subsystem would be creating a GIID and passing it to other systems that are themselves creating GIIDs.

One approach to handle this is that a system will only create a GIID if none is present in the incoming request.
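That rule can be sketched as a small guard; the incoming value (imagine it arriving in a request header) and the origin code PE65K are assumptions for the example:

```java
import java.util.UUID;

// Adopt an incoming GIID if present (the first-contact system created it);
// otherwise mint a new one. The origin code is an assumed per-subsystem value.
public class GiidFilter {
    static final String ORIGIN = "PE65K"; // assumed subsystem code

    public static String resolve(String incomingGiid) {
        if (incomingGiid != null && !incomingGiid.trim().isEmpty()) {
            return incomingGiid; // already created upstream; reuse it
        }
        return ORIGIN + UUID.randomUUID().toString().replace("-", "");
    }
}
```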


For user interfaces, exposing the GIID or parts of it in exception situations can be beneficial:

“We also send the txnId to our users in our 500 error messages and the X-Response-Id header, so if a user reports an issue, we can quickly see all of the related log entries.” —

Compare this to the Hunt the Wumpus adventure in enterprises that only have an approximate time of an issue and must track it over multiple systems.


If a GIID is part of a support system and, as above, the ID is shared with Users in some way, would the value need some form of validity testing? Should it be tested that it is well-formed, and should it include a checksum?

Example CRC calculation for a UUID, based on the textual version of the id:

groovy -e "crc = new java.util.zip.CRC32(); crc.update(UUID.randomUUID().toString().getBytes()); println Long.toHexString(crc.getValue())"



Origin ID

An OID uniquely identifies a system in an enterprise. This could be a web server or messaging system. Using a code for the actual system is recommended. Thus, instead of Acctsys, it would be a code, PE65K for example. Using a code is more ‘durable’ than a human readable name.

An extension is to also encode other information in the origin ID, such as application or subsystem identifiers.

This could even reuse various ‘naming’ standards, as found, for example in Directory services such as LDAP.

Unique ID

This ID must not be a business related entity such as user id or account number. The simple reason is that these may occur in the logging record multiple times for different sessions or transactions. For example, user Jean Valjean with account number 24601 may log in multiple times into a web site. Tracking a specific session if a problem occurs is easier if we use a unique ID.

A business level ID may not even be relevant in another system that interacts with the origin point. In one system the ID could be one type of ID, and in the other the same party or user could be identified with a different ID.

Note that as soon as it is determined, accessed, or generated, a business-level ID should be associated with the GIID. This could be a simple log output of that business ID which, since every log output has a GIID, will associate the business ID or value with the GIID.

Similarly when the same process communicates with another system, that system’s unique identifiers and related business IDs will also be associated with the GIID. For example, a web service will take the GIID and relate it to its own ID(s). Now a support engineer can follow the complete path of the system response to a user session.

ID creation

The easiest approach is to use the entry system’s session id created by the server. A potential problem is that this session id is not guaranteed to be unique and ends when the session expires. A UUID solves most problems.

Sample UUID generation in Groovy language:

groovy -e "println UUID.randomUUID().toString().replace('-','')"

If the system ID is 3491, then the above UUID is used to create the GIID for use in logging:

20110227T23:34:37,901; EV={_ID:"34911f788da1ac4a43bb82adb8e61cfcb205", USERID:"felixthecat", ….}

Alternative to UUID use?

A UUID rendered without hyphens is a 32-character string. Could something smaller be used? Perhaps, but eventually the complexity of threaded systems would push the uniqueness constraint of any ID approach toward a comparable length.

Other approaches are possible. Most of these will incorporate a timestamp in some way. Note that a UUID contains a timestamp.

An example of a ‘unique’ id is used by MongoDB’s ObjectID specification. That spec calls for a 12-byte BSON type of:
• a 4-byte value representing the seconds since the Unix epoch,
• a 3-byte machine identifier,
• a 2-byte process id, and
• a 3-byte counter, starting with a random value.
An example of an ObjectID string representation is ObjectId("507f1f77bcf86cd799439011").
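For illustration only, the four fields above can be packed into 12 bytes and rendered as the familiar 24-hex-character string; the machine and process values below are random stand-ins, not MongoDB’s actual derivation:

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative ObjectID-like generator: 4-byte epoch seconds, 3-byte machine id,
// 2-byte process id, 3-byte counter. Machine and pid are random placeholders.
public class ObjectIdLike {
    private static final SecureRandom RND = new SecureRandom();
    private static final int MACHINE = RND.nextInt(1 << 24);       // 3-byte stand-in
    private static final short PID = (short) RND.nextInt(1 << 16); // 2-byte stand-in
    private static final AtomicInteger COUNTER = new AtomicInteger(RND.nextInt());

    public static String next() {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt((int) (System.currentTimeMillis() / 1000)); // 4-byte epoch seconds
        buf.put((byte) (MACHINE >> 16)).put((byte) (MACHINE >> 8)).put((byte) MACHINE);
        buf.putShort(PID);
        int c = COUNTER.incrementAndGet();
        buf.put((byte) (c >> 16)).put((byte) (c >> 8)).put((byte) c); // 3-byte counter
        StringBuilder hex = new StringBuilder(24);
        for (byte b : buf.array()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```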

Log Framework support for GIID

The use of a GIID is a ‘cross-cutting’ concern. Requiring programmatic use of this ID would be burdensome and error-prone, even if stored in a thread-safe context.

Some logging frameworks support the concept of “nested diagnostic contexts” (NDC): a way of storing an ID so that interleaved logging is properly identified. See your logging framework’s documentation for more information.
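java.util.logging has no such context facility built in, so here is a minimal sketch of the idea: a ThreadLocal holds the current thread’s GIID and a custom Formatter prefixes it to every record (all names here are invented):

```java
import java.util.logging.Formatter;
import java.util.logging.LogRecord;

// Hand-rolled diagnostic context for java.util.logging: the GIID lives in a
// ThreadLocal, and the formatter stamps it on each record so interleaved
// output from multiple threads stays identifiable.
public class GiidContext {
    private static final ThreadLocal<String> GIID = new ThreadLocal<String>();

    public static void set(String id) { GIID.set(id); }
    public static void clear() { GIID.remove(); }
    public static String get() { return GIID.get() == null ? "-" : GIID.get(); }

    /** Formatter that prepends the current thread's GIID. */
    public static class GiidFormatter extends Formatter {
        @Override
        public String format(LogRecord record) {
            return "[" + get() + "] " + record.getLevel() + ": "
                + formatMessage(record) + System.lineSeparator();
        }
    }
}
```

Installing the formatter on a ConsoleHandler (or any Handler) makes the GIID appear on every line without touching application code.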

Example usage

In a conventional Java server application a JSP or template system would obtain a GIID and insert it into generated pages and associated client side scripts. That GIID would also be stored in the server side session. Since the GIID is stored at the session it is accessible to the various services and components on a server.

This ID is embedded in requests to other distributed servers and provides event correlation. Thus the logging systems will have access to the GIID, and client- or server-side errors can then display or use the GIID for tracking and reporting to assist support engineers.

Since the client also has the GIID, it can display or use this ID for customer service processing.

Of course, this would make more sense if it is a part of a wider Application Service Management (ASM) system.

Standards for IDs

Though many standards specify ID handling, modern architectures, especially web based or distributed, emphasize a stateless protocol. A GIID requirement could be one of those legerdemain stateful practices.

Development impacts

If logging is a deployed feature of an application then it too needs testing. But, since log output is an integration point, it does not fall under the “unit” testing umbrella. There is even some doubt whether this should be tested at all! Here is one example: Unit Testing: Logging and Dependency Injection.
If log files can contain security flaws, convey data, impact support, and impair performance, then they should be tested to confirm they conform to standards. Project management spreadsheets need to add rows for logging concerns.


Log output can be developer tested using the appropriate XUnit framework, like JUnit.
Mocking frameworks provide a means of avoiding actually sending the output of a logger to an external ‘appender’. “Use JMockit to Unit test logging output”.
In development of a project, the log output changes rapidly as the code changes. Selecting where in the software development life cycle (SDLC) to test logging or even specify what logs should contain is difficult.
One approach is that the deployed system will not do any app logging that was not approved by the stake holders. These must be “unit” tested, and all development support logging is removed or disabled except for use in a development environment.
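For java.util.logging, one way to make log output assertable in a plain JUnit-style test is an in-memory Handler; a sketch, with invented names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// "Unit" testing log output without mocking frameworks: a Handler that
// captures records in memory so a test can assert on them directly.
public class CapturingHandler extends Handler {
    public final List<LogRecord> records = new ArrayList<LogRecord>();

    @Override public void publish(LogRecord record) { records.add(record); }
    @Override public void flush() { }
    @Override public void close() { }

    /** Convenience: wire a capturing handler onto the logger under test. */
    public static CapturingHandler attach(Logger logger) {
        CapturingHandler handler = new CapturingHandler();
        logger.setUseParentHandlers(false); // keep test output off the console
        logger.addHandler(handler);
        return handler;
    }
}
```

A test attaches the handler to the logger under test and asserts on the captured LogRecords instead of parsing console output.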


There is no need to change every subsystem to use this log tracking approach. If the GIID is created somewhere in the “path” of a process or service, it adds value. Other systems can gradually start to use a tracking ID. Thus, the tools and training to use this capability can also be progressively introduced.

About this post

I was going to title this article ‘Logging in Anger’, as a response to my own experiences with application logging. Alas, there are so many issues that I had time to focus on only one, the result of a recent stint supporting an application that exhibits the typical logging anti-patterns. Example: it’s bad to get a null pointer exception, but worse to not know which argument to the function caused it.


Structured Logging

(This article was going to add more info on incorporating a GIID into a Structured Logging framework. This section is here for reference.)
Structured Logging is a type of app log file that is data based rather than prose based. Thus, it is machine readable and amenable to high-level tools, not just a text editor.

Treating logs as data gives us greater insight into the operational activity of the systems we build. Structured logging, which is using a consistent, predetermined message format containing semantic information, builds on this technique …. We recommend adopting structured logging because the benefits outweigh the minimal effort involved and the practice is becoming the default standard. —

An example is a system where the log output uses a predetermined message format. An overview of such systems is found in chapter 5 of “Common Event Expression”.

Note this should not be confused with a similar sounding technology called “Log-structured file system”.

Further reading

  1. Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems
  2. Dapper, A Large Scale Distributed Systems Tracing Infrastructure
  3. Log management and intelligence,
  4. Logging a global ID in multiple components,
  5. Application Service Management (ASM) system
  6. Application performance management,
  7. The art of application logging,
  8. Patterns For Logging Diagnostic Messages,
  10. How to test valid UUID/GUID?
  11. Log Data as a Valuable Tool in the DevOps Lifecycle (and Beyond),
  12. OWASP – Logging Cheat Sheet,
  13. How to Do Application Logging Right,
  14. Request for comment Structured Logging,
  15. 6 – Logging What You Mean: Using the Semantic Logging Application Block,
  16. A Review of Event Formats as Enablers of event-driven BPM,
  17. Basic Android Debugging with Logs,
  18. Mapped diagnostic context vs Nested diagnostic context,
  19. Building Secure Applications: Consistent Logging,
  20. Log for machines in JSON,
  21. Logging Discussion,
  22. CEE,
  23. CEE is a Failure.,
  24. Centralized Logging Architecture,
  25. Centralized Logging,
  26. Logging and the utility of message patterns,
  27. Payment Application Data Security Standard,
    Payment application must facilitate centralized logging.
    Note: Examples of this functionality may include, but are not limited to:
    • Logging via industry standard log file mechanisms such as Common Log File System (CLFS), syslog, delimited text, etc.
    • Providing functionality and documentation to convert the application’s proprietary log format into industry standard log formats suitable for prompt, centralized logging.
    Aligns with PCI DSS Requirement 10.5.3
  28. NIST 800-92 “Guide to Computer Security Log Management”,
  29. UniqueID Generator: A Pattern for Primary Key Generation,
  30. java.util.logging,

Set Java console output to "fine" logging level

While coding a Java module I used java.util.logging output at the INFO level in the code to help development. This info is useful only during development. Are these output statements removed when development is done?

The correct answer is that they should have been logged at the correct level, “debug“, or in java.util.logging’s NIH design, the FINE level. Thing is, the default configuration for console output blocks the FINE level. So, you have to configure the project’s logging appropriately.

This task falls under the develotrivia (related to administrivia) category, the endless tasks that should be easy but draw attention from writing code.

Fortunately I stumbled on a blog post by Michael R. Head that gives an easy way to configure the logger. Essentially it just creates a config string with text that normally would be put in a properties file. I took his example code and created a method that is invoked in the @Before method. Better yet, do this in the @BeforeClass method.

// Requires: java.io.ByteArrayInputStream, java.io.IOException, java.io.InputStream,
// java.util.logging.Level, java.util.logging.LogManager, java.util.logging.Logger

/**
 * Based on code by MR Head
 * @author J. Betancourt
 */
private void initLogging() {
	String config = "\n"
		+ "handlers = java.util.logging.ConsoleHandler" + "\n"
		+ ".level = ALL" + "\n"
		+ "java.util.logging.ConsoleHandler.level = FINE" + "\n"
		+ "com.sun.level = INFO" + "\n"
		+ "javax.level = INFO" + "\n"
		+ "sun.level = INFO" + "\n";

	InputStream ins = new ByteArrayInputStream(config.getBytes());

	Logger logger = Logger.getLogger(this.getClass().getName());
	try {
		LogManager.getLogManager().readConfiguration(ins);
	} catch (IOException e) {
		logger.log(Level.SEVERE,
			"Log manager configuration failed: " + e.getMessage(), e);
	}

	logger.fine("Logger initialized");
}

Example Use

@Before
public void setUp() throws Exception {
	initLogging();
	graph = new Graph();
}

BTW, it is not difficult to configure the logger: just point to a configuration properties file using the java.util.logging.config.file system property, or point to a log manager class (I tried this, doesn’t work). I think Michael’s method is easier, especially for test driven development (TDD).


Work on 'Common Event Expression' stopped

MITRE has stopped all work on CEE —

MITRE is willing to pass this effort forward and allow further development:

“MITRE is open to transition opportunities for CEE — including transferring all CEE specifications, documents, source materials, etc.; transferring all CEE-related intellectual property rights; and pointing this website to a new hosting location — to an organization, group, or individual willing to continue logging standards development in a philosophy similar that of the CEE community.”

The Logging Problem
12/19/2014: Just learned that the general term for these kinds of efforts is Structured Logging.

If you look at the efforts for a logging or event standard, you’ll find many. This is both good and bad. Perhaps, this effort really belongs in an Open Source initiative?

In this paper, the problem is outlined and past efforts are discussed. The past efforts are:

  • CEE (now)
  • CBE
  • CEF
  • CIDF
  • SDEE
  • WELF
  • XDAS

A good analysis of the event formats is given by Jörg Becker, et al. [1]


Changing Java Logging Levels At Runtime

How do you control Java Logger levels at runtime? There are many approaches, and each logging framework has its own peculiarities. This information is readily available, but surprisingly few examples are given.

[This is a repost; somehow it got lost]

Here I show a programmatic example using the java.util.logging library; Log4J, SLF4J, and others have similar capability. In listing 1 below, I create three named loggers and two threads.

The first thread will loop and randomly pick a logger and a specific logging level to log a message. If it is blocked, not meeting the current threshold level, another message is output.

The other thread will randomly set a threshold level to one of the created loggers (including the root logger).

Example output

Starting logRunner.
Starting managerRun.
[logger3]: blocked: 'FINEST' 
******* Setting logger "global" to level "WARNING" 
[logger2]: blocked: 'FINER' 
[logger2]: blocked: 'FINE' 
Feb 27, 2011 11:34:25 AM Main2$1 run
SEVERE: [logger2]: not blocked!
Feb 27, 2011 11:34:27 AM Main2$1 run
INFO: [logger3]: not blocked!
Feb 27, 2011 11:34:28 AM Main2$1 run
SEVERE: [logger3]: not blocked!
[logger3]: blocked: 'ALL' 
[logger2]: blocked: 'FINER' 
[logger3]: blocked: 'FINER' 
[logger2]: blocked: 'FINER' 
******* Setting logger "global" to level "ALL" 
[logger3]: blocked: 'FINE' 
[logger2]: blocked: 'FINER' 
[logger2]: blocked: 'FINER' 
[logger2]: blocked: 'FINER' 
Feb 27, 2011 11:34:37 AM Main2$1 run
WARNING: [logger2]: not blocked!
Feb 27, 2011 11:34:38 AM Main2$1 run
INFO: [logger1]: not blocked!
[logger2]: blocked: 'FINE' 

If your IDE supports it, this is easier to follow with console colorization. On Eclipse, Grep Console is great; I used the expression .*blocked:.* to highlight the blocked lines.

Listing one

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;
import java.util.logging.LoggingMXBean;
import static java.lang.System.*;

/**
 * Simple example of runtime Logger level setting.
 * @author jbetancourt
 */
public class Main2 {
	public static Random random = new Random();
	public static Object lock = new Object();
	public static List<Logger> loggers;

	/**
	 * Create two threads to illustrate the concept.
	 * One thread will continually choose a logger and a level to log a message.
	 * The other thread will randomly set a random logger to a random threshold level.
	 * The first thread will log at a level or show that it is blocked.
	 * @author Josef Betancourt
	 */
	public static void main(String[] args) {
		loggers = new ArrayList<Logger>();
		for (String name : Arrays.asList("logger1", "logger2", "logger3")) {
			loggers.add(Logger.getLogger(name));
		}

		final LoggingMXBean mxBean = LogManager.getLoggingMXBean();
		final List<String> loggerNames = mxBean.getLoggerNames();
		final List<Level> levels = Arrays.asList(Level.ALL, Level.FINEST,
			Level.FINER, Level.FINE, Level.CONFIG, Level.INFO,
			Level.WARNING, Level.SEVERE);

		// Continually pick a random logger and level; log, or report "blocked".
		Runnable logRunner = new Runnable() {
			public void run() {
				System.out.println("Starting logRunner.");
				while (true) {
					Level levelToUse = levels.get(random.nextInt(levels.size()));
					Logger logger = loggers.get(random.nextInt(loggers.size()));
					if (logger.isLoggable(levelToUse)) {
						logger.log(levelToUse, String.format("[%s]: not blocked!",
							logger.getName()));
					} else {
						out.println(String.format("[%s]: blocked: '%s' ",
							logger.getName(), levelToUse.getName()));
					}
					try {
						synchronized (lock) {
							lock.wait(1000);
						}
					} catch (InterruptedException e) {
						// don't care.
					}
				}
			}
		};

		// Randomly reset the threshold level of a random logger (root included).
		Runnable managerRun = new Runnable() {
			public void run() {
				System.out.println("Starting managerRun.");
				while (true) {
					Level level = levels.get(random.nextInt(levels.size()));
					String logName = loggerNames.get(random.nextInt(loggerNames.size()));
					out.println(String.format(
						"******* Setting logger \"%s\" to level \"%s\" ",
						logName, level.getName()));
					mxBean.setLoggerLevel(logName, level.getName());
					try {
						synchronized (lock) {
							lock.wait(3000);
						}
					} catch (InterruptedException e) {
						// don't care
					}
				}
			}
		};

		Executors.newSingleThreadExecutor().execute(logRunner);
		Executors.newSingleThreadExecutor().execute(managerRun);
	}
} // end class Main2

This is probably not a good example of threading code. Wasn’t the point.

Dec 21, 2013: Fixed link to the logging overview API page.


Diagnostic Differential Logging with JavaScript Example

Reducing the quantity of information to what is essential is very important in maintaining complex software. Modern development environments have debuggers and other tools to assist in this effort: conditional breakpoints, watch expressions, logging, stop on change techniques, and so forth.
As a supplementary technique an approach to debugging is shown which uses the dynamic differences in a running system to output a reduced level of information. A simple programmatic implementation is shown in a JavaScript scenario, but the technique is applicable using other languages and automated techniques.

KEYWORDS: logging, diff, debugging, diagnostics, BIST, JavaScript, AOP

Recently I was debugging a legacy JavaScript program. I used the outstanding FireBug plug-in in the FireFox browser. By the way, FireBug is truly indispensable, such a wealth of features, and it is FOSS. Now combine that with Selenium, which gives you the ability to automate the browser, and you have a powerful Integrated Development Environment (IDE). All that could be lacking is time and skill.
Unfortunately, I was not finding the problem. Later, at home, I got an idea: what if I could simply log what changed? That way I could more clearly see how the program behaves. All programs are just obfuscated state machines. Through a use-case or scenario, state variables are manipulated and non-state data is changed. Correlating these two with the events that drive the system is a prime aspect of debugging.

• You want to log what is happening in a system, but you don’t want to be inundated with non-useful data.
• You or someone else may want to quickly resume this investigation in another situation or project.
• By saving ‘interests’, a Built-in-self-test aid can be implemented.
• During development hooks can be put in place to provide differential logging.
• You want to capture the live system not a system which is controlled by a debugger.
• You cannot install new systems, tools, or frameworks to help.
• Breakpoints will break as the system is maintained.

My solution was to dynamically log the differences. I’m sure this is not new. However, it probably is not explicitly discussed. The usual approach is to log as much as possible; but since this is prohibitive, the logging is instead limited, and level filters and contexts are used to control the output. Breakpoints and watches, while usually enough to do the job, are brittle in the long term since the target program may change during the dev and debug process. Thus, dynamic difference reporting would be a better approach in this situation.


There are a few implementation choices.

One is to declare the data of interest. Then the logging system will track these and only output anything when there is a change in any of the data. With this approach the logger must be triggered either manually or automated. Manually is very easy; at an event change or at enter and exit points of a module or function, the differential logger is invoked.

To avoid changing the source code (it may not even be available), AOP could be used to implement the above. For example, an Around advice can be used to differentially trace the changes in methods.
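As a plain-Java sketch of that around advice, a JDK dynamic proxy can snapshot observable state before and after each call and log only the difference; the Counter interface and implementation here are invented for the demo:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Around advice without touching the target's source: the proxy snapshots
// state before and after each invocation and prints only what changed.
public class DiffAdvice implements InvocationHandler {
    public interface Counter { void increment(); int value(); }

    public static class CounterImpl implements Counter {
        private int count;
        public void increment() { count++; }
        public int value() { return count; }
    }

    private final Counter target;
    public DiffAdvice(Counter target) { this.target = target; }

    public static Counter wrap(Counter target) {
        return (Counter) Proxy.newProxyInstance(
            Counter.class.getClassLoader(),
            new Class<?>[] { Counter.class },
            new DiffAdvice(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        int before = target.value();           // snapshot state before the call
        Object result = method.invoke(target, args);
        int after = target.value();            // snapshot state after the call
        if (before != after) {                 // differential log: changes only
            System.out.println(method.getName() + ": value " + before + " -> " + after);
        }
        return result;
    }
}
```

Calls that change nothing produce no output, which is exactly the “reduced level of information” the technique is after.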

Framework Plugin
If the system under test is running under a debugger or is instrumented, it may already have a facility to capture change events. For example, FireBug has this, and the FireBug plug-in FireDiff implements a change monitor and visualization output. There may be, or should be, a comparable plug-in for other systems like the Eclipse IDE.

Code Example
To illustrate this concept it is used to debug a web page’s DOM changes via JavaScript. The page has a button that, when clicked, will randomly change two input fields’ content. Notice that when the change involves just a reordering of two adjacent characters in a field, it is not readily apparent; the differential logging makes it explicit. The log output is just the result object converted to JSON. An example: {"event":"run","record":{"foo2":["","accd"],"foo4":["","ollrr"]}}. A real implementation would do otherwise.

To specify which DOM nodes to monitor, a JavaScript object is used (the data object shown in Listing 2 below):

var data = {
  "foo2" : { "get" : "v" },
  "foo4" : { "get" : "v" }
};

The “get” field specifies how to get the value from the browser DOM; here “v” means the node’s value property. Another method would be “innerHTML” access. A better approach would be to detect the appropriate ‘getter’ by using the node type.

(Figure: demo screen capture)

Listing 1, implementation

/**
 * Differ constructor.
 * @param data which nodes to record
 * @param formatter how to show results
 */
function Differ(data, formatter) {
	this.data = data;
	this.formatter = formatter;
	if (!this.formatter) {
		this.formatter = this.format;
	}
}

/**
 * Data and methods of class.
 */
Differ.prototype = {
	MAXSIZE : 15,

	/**
	 * Detect changes and save latest.
	 * @param event Where was diff created
	 * @returns result object
	 */
	diff : function(event) {
		var results = {};
		if (!this.data) {
			return results;
		}
		results = {"event" : event};
		results["record"] = {};
		var record = results["record"];
		for (var key in this.data) {
			var ref = document.getElementById(key);
			var domValue = "";
			switch (this.data[key].get) {
			case "v":
				domValue = ref ? this.shorten(ref.value) : "";
				break;
			}
			var prior = this.data[key].value ? this.data[key].value : "";
			if (domValue != prior) {
				record[key] = [prior, domValue];
				this.data[key].value = domValue;
			}
		} // end for
		return results;
	},

	/**
	 * Convert results to message.
	 * @param results object
	 * @returns JSON
	 */
	toMessage : function(results) {
		var obj = results;
		if (!obj) {
			obj = {"error" : "results is undefined or null"};
		}
		return this.formatter(JSON.stringify(obj));
	},

	/**
	 * Get size of an object.
	 * @param obj
	 * @returns {Number}
	 */
	size : function(obj) {
		var len = 0;
		for (var key in obj) {
			if (obj.hasOwnProperty(key)) {
				len++;
			}
		}
		return len;
	},

	/**
	 * Limit size of value.
	 * @param value
	 * @returns possibly truncated value
	 */
	shorten : function(value) {
		return (value.length > this.MAXSIZE) ?
			value.substring(0, this.MAXSIZE) + "..." : value;
	},

	/**
	 * Make message human readable.
	 * @param str message
	 * @returns message with line break
	 */
	format : function(str) {
		return str + '<br/>';
	}
};
Listing 2, Demo script

var data = {
  "foo2" : {
    "get" : "v"
  },
  "foo4" : {
    "get" : "v"
  }
};

var differ = new Differ(data);
differ.diff(); // prime values

function runIt() {
  var ref2 = document.getElementById('foo2');
  var ref4 = document.getElementById('foo4');
  if (Math.floor(Math.random()*101) > 60) {
    swapInputs(ref2, ref4);
  } else {
    scramble(ref2, ref4);
  }
  var msg = differ.toMessage(differ.diff("run"));
  document.getElementById('output').innerHTML += msg;
  document.getElementById('output').innerHTML +=
    Array(60).join("-") + "<br/>";
}

function scramble(ref2, ref4) {
  var ch = ref2.value.charAt(1);
  var r2v = ref2.value;
  ref2.value = replaceChar(r2v, 2, ch);
  ref4.value = replaceChar(ref4.value, 2, ref4.value.charAt(1));
}

function swapInputs(ref2, ref4) {
  var temp = ref2.value;
  ref2.value = ref4.value;
  ref4.value = temp;
}

function replaceChar(str, ndx, v) {
  return str.substr(0, ndx) + v +
    str.substr(ndx + v.length);
}

Listing 3, the demo web page

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <link rel="stylesheet" type="text/css" href="style.css" />
  <script type="text/javascript" src="DebugUtils.js"></script>
  <script type="text/javascript" src="DemoDiff.js"></script>
</head>
<body>
  <span class="mytitle">Differential Logging Example</span>
  <div style="width:600px;">
	<div class="box inputBox">
		1: <input type="text" id="foo1" value="xyyz"/>
		2: <input type="text" id="foo2" value="acbd"/><br/>
		3: <input type="text" id="foo3" value="wzlm"/>
		4: <input type="text" id="foo4" value="olxrr"/>
	</div>
	<span class="buttonClass">
	    <input type="button" id="run" value="Test" onclick="runIt();"/>
	</span>
	<div id="output" class="box inputBox"></div>
	<span style="float:right;margin-right:10em;font-size:xx-small;">
           &copy; Josef Betancourt</span>
  </div>
</body>
</html>

Listing 4, the CSS

body {
  background-color: #E6E6E6;
  margin: 3em 8em 12em 3em;
}
.box {
  border: 3px solid #D4D4D4;
  padding: 8px;
}
.inputBox {
  padding: 2em 1em 2em;
  margin: 10px 5px 20px;
}

Related Patterns

Related Intellectual Property

Further Reading

Debuggers are for Losers

Use JMockit to Unit test logging output

A very simple method of unit testing an application’s log output is presented, using a state-based mocking approach and the JMockit toolkit.

Environment: Java language; logging frameworks such as Log4j, commons-logging, java.util.logging, and SLF4J; application logs.

Unit test logging output? Isn’t that going overboard? Most likely. However, there are some reasons why, in certain parts of a code base, it may be worthwhile:

  1. Audit requirements
  2. Use of Structured Logging.
  3. Logging adheres to standards such as Common Event Expression (CEE) language. (Note, CEE has been cancelled.)
  4. Ability to maintain a system
  5. Log maintenance tools get correct data
  6. Reduce technical debt
  7. SIEM (security information and event management) integration

In Listing 1, a simple class uses java.util.logging.Logger to log. We want to make sure this class will always log this in the future, i.e., a regression test. To qualify as a unit test, the system under test (SUT) should be isolated. Thus, parsing an actual log output file would not be optimal.

Listing 1

import java.util.logging.Level;
import java.util.logging.Logger;

/** Example class that logs. */
public class Service {
	Logger logger = Logger.getLogger(this.getClass().getName());

	public void serve() {
		logger.log(Level.INFO, "Hello world!");
	}
}

In Listing 2 we use JMockit to mock java.util.logging.Logger. This should also work for other logging frameworks. JMockit has two approaches for applying mock techniques: behavior-based and state-based.

We apply the state-based approach below (just because that is the one I’m starting to get the hang of). The Arrange, Act, Assert (AAA) pattern is still used, but the Assert step is inside the mock object. We apply a simple ‘equals’ test. Of course, depending on what we expect in the log message, a regex may be more useful.

Listing 2

import static org.hamcrest.CoreMatchers.*;
import static org.junit.Assert.*;

import java.util.logging.Level;
import java.util.logging.Logger;

import mockit.Mock;
import mockit.MockUp;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

/**
 * JUnit test for Service class.
 * @author jbetancourt
 */
public class ServiceTest {
	private Service service;

	@Before
	public void setUp() throws Exception {
		service = new Service();
	}

	@After
	public void tearDown() throws Exception {
	}

	@Test
	public final void we_are_logging_correctly() {
		// Arrange
		mockLogger();

		// Act
		service.serve();

		// Assert is done inside the mocked log method
	}

	private void mockLogger() {
		new MockUp<Logger>() {
			@Mock
			public void log(Level level, String msg) {
				assertThat("Hello world!", is(equalTo(msg)));
				assertThat(level, is(equalTo(Level.INFO)));
			}
		};
	}
}
The above technique just tests that the log parameters are correct. This doesn’t check that the log output to file itself is correct. That is a different concern. Since the actual output logging is controlled by various configuration options, a unit test may not make sense. Would that be a functional test?

Of course, just checking the message sent to the logger may not be enough. In this case a possible approach is to hook into the logging library to capture the final log output. In the java.util.logging API, one can add a new stream Handler to the Logger instance being used.

In listing 3 below a simple attempt is made to use a stream handler to capture the actual logger output used by java.util.logging.

Listing 3

public class ServiceTest {
  Logger logger = Logger.getLogger(ServiceTest.class.getName());
  private OutputStream logOut;
  private StreamHandler testLogHandler;

  @Before
  public void setUp() throws Exception {
    setUpLogHandler(logger);
  }

  @Test
  public final void exception_log_has_all_info() {
    logger.log(Level.WARNING, "Hello world!");
    testLogHandler.flush(); // StreamHandler buffers the output
    String captured = logOut.toString();
    Assert.assertTrue(captured.contains("Hello world!"));
  }

  /**
   * Add stream handler to logger.
   * Will take more effort than this, e.g., may not have a parent handler.
   */
  protected void setUpLogHandler(Logger logger) {
    logOut = new ByteArrayOutputStream();
    Handler[] handlers = logger.getParent().getHandlers();
    testLogHandler = new StreamHandler(logOut, handlers[0].getFormatter());
    logger.addHandler(testLogHandler);
  }
}

* JUnit: 4.*
* JMockit: 0.999.11
* JDK: 1.6*
* Eclipse: IDE 1.7
* Git: 1.76.msysgit.0

Shown was a little technique that may come in handy one day. Though presented in the context of logging, it is really just a simple application of state-based mocking.

Further reading

Carlos Santana/Mahavishnu John McLaughlin – The Life Divine