Pondering via the web

This morning I got up early and sat at my desk with a coffee. I got multiple windows up on my multi-monitor setup of 24″ displays and my adequate quad-core Windows PC, and perused the latest news and tech stories.

Then I opened my Feedly account and looked at my Science feeds. “The Reference Frame” has a new post. I like that blog. Most of the time I don’t know what he is writing about, and the anti-climate-change diatribes sound like Fox News, but on the whole it has some tidbits for us science aficionados. Today, for example, he is discussing something about intelligence, which reminded me that another sciencey type, Bart Kosko, once presented an “equation” of intelligence. Bart was discussing creativity: roughly, creativity as the ratio of the number of results or responses to the number of data or stimuli, or something like that.

So, I searched for the Kosko reference. Couldn’t find it. I did find this quote, which is very good: “Whenever the invisible hand isn’t operating, the iron fist is” — Bart Kosko. Then I sidetracked to the Subsethood theorem, and then to “The Sample Mean”.

Of course, I then had to browse the web to make sure I really knew what the sample mean is. This led to views of various sites and, of course, Wikipedia (which led to all sorts of side tracks into other things).

Anyway, back to the sample mean. While reading up on it, I thought (not listing the paths that led here): if the standard deviation is so weak when the data has huge outliers, why not do the same computation using the median? I opened Excel and tried a few examples. Not bad. If the data is pretty ‘regular’, it stays in line with the standard deviation. Hmmm. Maybe this should be part of statistics? Well, more browsing, and of course it turns out that it is. It is called the Median Absolute Deviation (MAD). MAD can even be computed in Excel.
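The median-based deviation I poked at in Excel can be sketched in a few lines of JavaScript (my own sketch, not the Excel formulas):

```javascript
// Median of a numeric array (assumes a non-empty input).
function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Median Absolute Deviation: the median of |x - median(xs)|.
function mad(xs) {
  const m = median(xs);
  return median(xs.map((x) => Math.abs(x - m)));
}

// Sample standard deviation, for comparison.
function stdev(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const ss = xs.reduce((a, x) => a + (x - mean) ** 2, 0);
  return Math.sqrt(ss / (xs.length - 1));
}

const regular = [2, 4, 4, 4, 5, 5, 7, 9];
const outlier = [...regular, 1000];
console.log(stdev(regular), mad(regular)); // both small
console.log(stdev(outlier), mad(outlier)); // stdev explodes, MAD barely moves
```

For roughly normal data, MAD × 1.4826 approximates the standard deviation, which is why the two track each other on ‘regular’ data while MAD shrugs off the outlier.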

Well, that was an hour. Not wasted. Certainly better than watching TV. Now back to the task at hand, working on something that may make a good Kickstarter project.

Is Google creating an alternative to RSS?

Google dropped Reader, and someone reported that the “Google Alerts” RSS feed is not working. Maybe Google is working on replacing RSS or Atom web-feed technology with something more in line with Google+ and, like Facebook, creating a “walled garden.”

We know Google does some great work correcting or improving existing approaches, for example, ‘Protocol Buffers’. Syndicated feeds are ubiquitous, if lately on the downswing; perhaps it is time for fresh thinking on the approach. Certainly modern web technology has much more powerful tools, like XMLHttpRequest, Server-Sent Events, WebSocket, and WebRTC.

How to Measure User Interface Efficiency

My frustration level reached a peak while using a mobile phone. So, again, I’m thinking about GUI design. Why are the interfaces so bad, and how can we fix them?

The first step is just figuring out how to measure the badness. There are plenty of UI measures out there and many papers on the subject. BTW, I’m just a developer grunt, coding eight hours a day, so this is out of my league. Yet the thoughts are in my head, so ….

To get to a goal takes work. In physics, W = Fd: work equals force times distance. There is no direct analogue for a user interface. But what if W equals the user-interface element activated times the number of possible objects to act upon, i.e., W = U × O? Work equals UI ‘force’ times the number of options. This ‘force’ is not a physical force or pressure, of course; it is just a constant value.

For example, you click on a button and are then confronted with a choice of five options. Let’s say you are reading a web page and you want to share it with someone. This takes too much work, way too much. Even getting to the sharing choice is monstrous: click the menu button, click share, find which method of sharing, get to the contacts app, blah blah.

So, here is what we have. Activating a user interface element is a ‘force’; each type of element is given a constant value: a button is 10, a scroll bar is 100, and so forth. The number of resulting options that are relevant to the end goal is the ‘distance’.

Now divide this resulting value by how much time it took you to get there and you have power: P = (U × O)/T. (Update 7/26/2013: probably a better ‘distance’ dimension is the actual distance of pointer movement or manipulations.)

Add these up for each step in completing the goal and you have a metric for an interface user story.
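The per-step work and per-story power could be sketched like this. The post only fixes button = 10 and scroll bar = 100; the menu constant and all the example numbers below are made up for illustration:

```javascript
// 'Force' constant per UI element type. Button and scroll bar values are
// from the post; the menu value is a made-up placeholder.
const FORCE = { button: 10, scrollbar: 100, menu: 10 };

// One step's work, W = U x O: the element's force times the number of
// relevant options it presents.
function stepWork(elementType, options) {
  return FORCE[elementType] * options;
}

// Power for a whole user story: total work divided by total time (seconds).
function storyPower(steps, seconds) {
  const work = steps.reduce((sum, [el, opts]) => sum + stepWork(el, opts), 0);
  return work / seconds; // P = (U x O) / T
}

// The sharing example: a menu button presenting 5 options, then a contact
// list of 30, taking 12 seconds in all (numbers invented for illustration).
console.log(storyPower([["button", 5], ["menu", 30]], 12));
```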

Why use the number of options for distance? The number of options presented to the user is stress, kind of related to Hick’s Law: “The time it takes to make a decision is a function of the possible choices he or she has.” If computers and software were not stuck in the 1960s (face it, modern stuff is just fancier screens), they would know what the hell I want to do.
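Hick’s Law is often written T = b · log2(n + 1). A quick sketch; b is an empirically fitted constant, and the 0.2 s here is only a placeholder, not a measured value:

```javascript
// Hick's Law: decision time grows with the log of the number of equally
// likely choices. b is an empirical constant; 0.2 s is a placeholder.
function hickTime(choices, b = 0.2) {
  return b * Math.log2(choices + 1); // T = b * log2(n + 1)
}

console.log(hickTime(1));  // a single choice still costs b seconds
console.log(hickTime(31)); // 31 choices cost only 5x that, not 31x
```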

A follow up post will give the solution to this User Experience Design (UXD) or Interaction Design (IxD) problem, and the solution is actually pretty easy.


Created the follow up:  Proactive Interface

Why don't browsers have up and down buttons?

Browsers have a forward and back button. These help you navigate within your recent browser history. But, when you land on a site from an external link you sometimes want to, for example, go up the URL path to higher ‘folders’ within the site.

For example, you’re at http://somewhere.com/land/animals/waterbuffalo.html. How do you get to the land folder? Maybe there is something there. Or how do you get to somewhere.com itself? The answer: you click on the address bar and edit the URL. Yuck! What is this, the 1800s? On any kind of smartphone this is a pain to do. And did you ever try to guide a non-computer-savvy person over the phone to navigate a site by editing a URL? Even more painful.
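A sketch of what an “up” button could compute, using the standard URL API available in browsers and Node:

```javascript
// Strip the last path segment to get the parent 'folder' URL.
function upUrl(href) {
  const u = new URL(href);
  const parts = u.pathname.replace(/\/+$/, "").split("/");
  parts.pop(); // drop the last segment (file or folder)
  u.pathname = parts.join("/") + "/";
  u.search = ""; // query and fragment rarely apply to the parent
  u.hash = "";
  return u.href;
}

console.log(upUrl("http://somewhere.com/land/animals/waterbuffalo.html"));
// -> http://somewhere.com/land/animals/
```

Dropping the query string and fragment is a judgment call; an add-on might choose to keep them.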

The old Google add-ons for various browsers used to have a page up widget. On FireFox I now use “Dir Up”.

Did JavaScript wipe out the dinosaurs?

Imagine, a very dangerous vulnerability is discovered in Javascript, and people have to turn it off in the browser (via configuration or plugin). This could come from a pimply hacker having fun or part of a multi-front cyberwar attack.

What if this lasts a few days? Would your customers be unable to use your site or web app? Imagine the loss of GDP as commerce grinds to a halt, the empty stares as people can no longer play games or chit-chat online, families’ financial resources unavailable. And so forth.

Yes, this is an exaggeration. No company would make its web site inaccessible when JavaScript is unavailable, right? Unfortunately, there probably are a few. Certainly many social web sites would not seem so social if they could not provide the features they do.

Why JavaScript?
This is understandable. JavaScript is a great language, and it enables highly interactive web applications. No longer do web pages have to be “paged” in and out; with technologies such as AJAX, fine-grained architectures are possible. Validation, more focused forms, and easier-to-use applications are all powered by the evolving “2.0” web stack.

Other reasons for disabling Javascript
The problem is not only limited to the opening scenario. There are a myriad of reasons why someone may disable JavaScript. In a discussion of “Hash URIs”, we find this:

  • users who have chosen to turn off Javascript because:
      – they have bandwidth limitations
      – they have security concerns
      – they want a calmer browser experience
  • clients that don’t support Javascript at all, such as:
      – search engines
      – screen scrapers
  • clients that have buggy Javascript implementations that you might not have accounted for, such as:
      – older browsers
      – some mobile clients

The most recent statistic I could find, about access to the Yahoo home page indicates that up to 2% of access is from users without Javascript (they excluded search engines). According to a recent survey, about the same percentage of screen reader users have Javascript turned off.

This is a low percentage, but if you have large numbers of visitors it adds up. The site that I care most about, legislation.gov.uk, has over 60,000 human visitors a day, which means that about 1,200 of them will be visiting without Javascript. If our content were completely inaccessible to them we’d be inconveniencing a large number of users.

— “Hash URIs“, Jeni Tennison.

While I don’t think 100% fallback to non-JavaScript web pages is possible or desirable, companies should be aware of the possible threats. Thus, every site or web app should have a minimum set of functionality exposed via non-scripted, pure HTML (and CSS?). For example, on a financial site, a customer should be able to query their balance without all the fancy script tricks; “just the facts, ma’am”.
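One way to get that minimum is progressive enhancement: the balance query is a plain HTML form that works with no script at all, and JavaScript, when present, merely upgrades it. A sketch with stand-in objects (the function name and form shape are mine, not any real API; a real version would attach to a DOM form element and use fetch):

```javascript
// Upgrade a plain HTML form: without script it submits normally and the
// server returns a plain page; with script, intercept the submit and render
// the result in place. fetchText and render are injected so this sketch
// stays runnable outside a browser (a real fetchText would be async).
function enhanceForm(form, fetchText, render) {
  form.onsubmit = (event) => {
    event.preventDefault(); // stop the full-page submit
    render(fetchText(form.action));
  };
}

// Stand-in 'form' object; in a browser this would be a real <form> element.
const form = { action: "/balance", onsubmit: null };
enhanceForm(form, (url) => `balance from ${url}`, (text) => console.log(text));
form.onsubmit({ preventDefault: () => {} });
```

The key property: if this script never runs, the form still works, so the “minimum set of functionality” survives a JavaScript outage.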

When to use JavaScript
Jakob Nielsen gives a good starting point for how much web 2.0 (which is enabled by JavaScript) should be used in various types of sites:

As an extremely rough guideline, here’s the percentage of Web 2.0 infusion that might benefit different types of user experience:

Informational/Marketing website (whether corporate, government, or non-profit): 10%
E-commerce site: 20%
Media site: 30%
Intranets: 40%
Applications: 50%

— http://www.useit.com/alertbox/web-2.html

Further Reading
Web 2.0 ‘neglecting good design’


Hash URIs

Hacker group vows ‘cyberwar’ on US government, business

Walking in others shoes: Turn JavaScript off for a day

AJAX Vulnerabilities: How Big the Threat?

Web 2.0 Can Be Dangerous…

BlackBerry users urged to disable Javascript after web browsing vulnerability revealed

Apple Safari window object invalid pointer vulnerability

"Well, this is embarrassing"

“Firefox is having trouble recovering your windows and tabs. This is usually caused by a recently opened web page.”

I’m seeing this more often now. FF 4.0.1 has a problem. It happens in any site with rich media, like Netflix. Some people are blaming this on plug-ins, Flash, and other things. I don’t know, FF 3.6 never had this problem with the same sites. Come to think of it, neither did 4.0.

Of course, it could be something else. I looked in Windows Event Viewer but did not see anything relevant.

Just did a Windows update, maybe something will shake out.

Further Reading
Google search for 4.0.1 crash

Tab Docking via Multiple Tabbed Browser Interface

Yes, the world is moving to tiny displays (for movies??? nuts), but for real productive use, multiple very large monitors are required.

Many web pages, however, won’t take advantage of large displays. They use fixed absolute sizes for compatibility with the most common display resolutions. Thus, on a large display there is plenty of room for showing multiple pages.

This is where a Multiple Tabbed Browser Interface comes in. If it were around (I just made up the term), I could drag a tab and dock it at the edges of the browser, using the large monitor space to show multiple web pages at once. Great for web development. Sure, I could open multiple browser windows, and I do, but ….

This is not the same as just having multiple tabs available, where one can switch among different loaded web pages or apps. That allows only single-document views. In web development, having multiple documents visible (and, hopefully, fancy stuff like entangled scrolling) would be ideal.
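The “entangled scrolling” bit could work by mapping one pane’s relative scroll position onto the other’s range. A sketch with plain objects standing in for DOM elements (a real version would hook each pane’s scroll event):

```javascript
// Keep two panes' scroll positions proportional: when the source is halfway
// down its content, move the target halfway down its own (different) range.
function entangle(source, target) {
  const range = (pane) => pane.scrollHeight - pane.clientHeight;
  target.scrollTop = range(target) * (source.scrollTop / range(source));
}

// Stand-ins for two DOM elements with different content heights.
const left = { scrollTop: 50, scrollHeight: 200, clientHeight: 100 };
const right = { scrollTop: 0, scrollHeight: 600, clientHeight: 200 };
entangle(left, right);
console.log(right.scrollTop); // -> 200 (halfway down right's 400px range)
```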

Note that in a traditional Multiple Document Interface (MDI), windows are very flexible, which can lead to complexity. I am referring only to docking behavior, so the usability arguments against MDI don’t apply to MTBI.

I wrote the above without looking to see if this is already available. As far as I’m aware, it’s not in Firefox, Chrome, or IE9. Apparently, one browser does something similar (though it splits the same document rather than showing different documents):

The Konqueror browser (available for the K Desktop Environment on Unix and Unix work-alikes, such as Linux) supports multiple documents within one tab by splitting documents. In a Konqueror tab, documents can be split horizontally or vertically, and each split document can be re-split.

Further Reading
Tab (GUI)
Multiple document interface
Comparison of document interfaces

Enough with the lightboxes!

The web has an architecture and style. Click a link, get a page. What could be simpler? Lightboxes don’t fit in.

This morning before leaving for work I checked the news. Oh, a new version of something. Go to the page, click a link and, boom, a lightbox. I’m starting to see more and more of this usage. And it is not good.

Lightboxes, like any technology, have appropriate use cases. For rich media, like photos, streaming, and detailed views of a product, a lightbox can be ideal. It is also useful for a complex, focused interaction transaction (a dialog) in web pages or RIAs.

However, the web’s “idiom” is that after I click a link, I decide, using the available links or browser facilities, what to do next. I decide. It is called REST, and it allows re-use and transformation. A lightbox, as usually implemented, doesn’t even let you resize it; all you get is a close button (sometimes one that is even hard to identify). In the past, a lightbox or other modal technique was just a signal to kill the browser (Alt-F4 on Windows, btw) in case something malicious got into the system or you innocently landed on a naughty site.

I thought I was alone in my growing distaste, but here is one fellow lightbox doubter. He makes great points and gives the mobile perspective.

Why is lightbox use spreading? One reason is that tools make it much easier to create them, so any script kiddie can gush with pride over his latest anti-usability creation. Another is that they are a sneaky marketing device to keep you on the current page no matter what.

Time for lightbox blockers? I don’t think so, but …

Other uses
Windows 7, as part of its security process, now uses a “lightbox”-like technique: the dialog prompt takes focus, and all the other desktop areas darken. Here, I think, it is a good use case.


In summary, lightboxes:

  • Have legitimate uses, especially for media or business processes.
  • Break the look and feel of a site.
  • Usually cannot be resized. Scared of a real closeup of a product?
  • Cannot be easily linked to.
  • Confuse non-expert or casual users.
  • Don’t follow RESTful architecture.
  • Cannot be repurposed.
  • But are better than that ole Javascript alert’s ugly message box?

My company uses lightboxes. However, I think it is an appropriate, expert, magnificent, enlightened, and wise use. … Just in case my boss reads this post. 🙂

jsUnit based tests won't run in other folders?

A fix for jsUnit tests not running in external folders.

This is why software is such a frustrating undertaking sometimes. I once had a jsUnit-based test working. It ran fine in Firefox. Today it did not. The funny thing is that the existing tests that come with jsUnit still work. Other browsers fail too (maybe).

It had to be a typo, a bad URL, or something. The jsUnit docs say the problem is that I am not correctly giving the path to the jsUnitCore.js file. I tried everything. But now I try to get smart. First, copy the tests that are working to another folder: I copy failingTest.html to /temp/jsunit, and I copy jsUnitCore.js there too.
Still doesn’t work.

I edit jsUnitCore.js, and add an alert to show that it is being executed or loaded. To make sure I really did it, I diff:

C:\temp\jsunit>diff failingTest.html \java\jsunit\tests\failingTest.html

<     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<     <script type="text/javascript" src="./jsUnitCore.js"></script>
>     <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
>     <link rel="stylesheet" type="text/css" href="../css/jsUnitStyle.css">
>     <script type="text/javascript" src="../app/jsUnitCore.js"></script>

C:\temp\jsunit>diff jsUnitCore.js \java\jsunit\app\jsUnitCore.js

< alert("Maybe a career in sanitation?");

I try the test and I see the message. So I know that the library is being found and is running. That is weird. Must be the browser. I empty the cache. Still nothing. Arrrrr.

Google? I find a reference in the jsUnit forum to someone having a similar problem. No advice there, but one person gives a URL for the solution. Is it a good URL, spam, or worse? Nope, it’s legit, real advice. It is the browser. Thank you, Andrew! I should have checked the forum first.

Browsers, bah! I hope they don’t start showing up in embedded systems. That’s probably how WWIII will get started: some JavaScript mistyping.

17JAN11: Spoke too soon. Firefox 4.0b9 is showing the problem again. I’ll try it in my Ubuntu instance too.

JSUnit 2.2 on Firefox 3.07 – FTW


FireFox 4 beta 7 is fast

FireFox 4 beta 7 is fast when it uses hardware graphics acceleration.

On my system it detected:

  • ATI Radeon HD 5600 Series
  • Direct2D enabled: true
  • DirectWrite Enabled: true
  • GPU Accelerated Windows: 2/2 Direct3D 10.

To find out if it detected yours, put about:support in the address bar and go to the bottom of the resulting page. This blog post has more information.