If an application is showing a progress bar of some kind, it means it is doing something. If that something involves actual work, not just waiting for a network response, the app should indicate the current work it is doing.
Instead, some apps use an “idiot light” approach. Case in point: my PC crashed and Startup Repair is running. All it shows is a progress indicator, not one based on completion status or time remaining, just a blue rectangle cycling by. It's been running for hours. Is it still doing anything? Is it hung? Is there hope?
Sure, for many apps, actual work-status output is redundant and not useful to the “average” user. So when should more information be shown? When the elapsed time is over some threshold. Or, if a user wants more information, they can signal that to the app. Many applications use this approach. Why something as fundamental as repairing disks doesn't do this is very puzzling.
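The threshold idea can be sketched as follows; the function, the message formats, and the 30-second threshold are all hypothetical, just to show the shape of the approach:

```python
DETAIL_THRESHOLD_SECS = 30  # hypothetical threshold; tune per application

def progress_message(elapsed_secs, current_step, verbose_requested=False):
    """Return the text a progress dialog should show.

    Below the threshold a plain 'working' indicator is enough; past it,
    or when the user explicitly asks, surface the actual work being done.
    """
    if verbose_requested or elapsed_secs > DETAIL_THRESHOLD_SECS:
        return f"Working ({int(elapsed_secs)}s): {current_step}"
    return "Working..."

# Early on, the user sees only the "idiot light"...
print(progress_message(5, "Scanning volume C: for file-system errors"))
# ...but once the task runs long, the dialog explains itself.
print(progress_message(7200, "Rebuilding master file table"))
```

The point is not the code, it's that the extra information already exists inside the tool; showing it costs almost nothing.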
The same dialog box is used in other parts of Windows 10, like when creating a Restore Point, so we still have the same “idiot light” User Experience Design (UxD).
The prevailing method of configuring an application is to run the application and then select various configuration screens or preferences. This is fine, until it is not.
Sometimes you cannot run the application, so you can't configure it. An example would be the Eclipse IDE. It is possible to break Eclipse by loading a misbehaving plug-in or by other means. If Eclipse cannot start, one can't remove the offending plug-in or make other required changes.
This brings up the main problem with relying on in-app configuration: when the app cannot run, one must be skilled in the underlying configuration storage the app uses. Further, this storage may be minimally documented, complex, or spread across multiple places. In worse cases, this storage is non-textual. For instance, returning to the Eclipse example above, I have yet to find via a web search how to easily use the OSGi Equinox runtime (that Eclipse builds upon) to remove a recently installed feature or plug-in. I'm sure it is possible, and probably trivial if you know Equinox. But how many users of the Eclipse IDE have even heard of Equinox or OSGi?
The app may be compromised or broken.
Configuring via configuration files or other apps requires a different skill set.
Attempts to repair via non-app UI can make things worse.
Allow a subset of an application to be used for configuration use.
Allow the running of the app in “safe” modes. Example, Browser without plugins.
Create a separate configuration application.
Allow easier means to roll back to previous configuration.
The best solution is to create a secure application that can use the target application’s configuration storage systems. If the in-app configuration support can be modularized, this is optimal.
Currently I have to share information by capturing screenshots of various tools. An example of where this is required is Eclipse's Java development environment.
The Eclipse IDE, via its various added plug-ins and features, captures a lot of metadata about a project and its assets. For example, in the Outline view you can see various listings with applied filters. Now try to share that in an email or document: you have to take a snapshot of the screen. This is not a very practical example; it's just to show the issue. The same issue comes up in various other plug-ins, like Team providers. Note, I'm not singling out Eclipse on this; all tools and applications have this problem.
While screenshots can convey the original purpose of sharing a particular view of data, they are very difficult to reuse in diverse ways. For example, we may want to sort or filter a listing of data. Or we may want to reuse that data with external reporting or metric applications. With a GUI screenshot this is not possible.
Graphical tools should allow piping of information. As in Unix piping, a tool should allow repurposing of its generated data. This is not just a developer's geeky need; many times error pop-ups and other types of displays do not allow the end user to copy even the actual text they show.
There are many ways of doing this. At root, the two options are textual piping, as used in *nix systems, and object piping, as used in PowerShell.
The ideal solution would allow drag-and-drop functionality. This is already used in many apps via OS- or application-level support. For example, right-click in the browser and you can copy content. Yet, even in the browser scenario, the data result is not semantic (based on the information context); it's just text or, via the contextual menu, a set of standard objects.
One possibility is that a drag-and-drop sets up a pipeline and a standard markup of the data is transferred.
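One way to sketch that transfer, assuming a made-up JSON payload format (none of these field names are a real standard): the drag source serializes the view's data along with its semantic context, so the receiving tool gets structured records rather than flat text.

```python
import json

def export_outline_view(items, source, filters):
    """Package a GUI listing as a semantic payload for a drag-and-drop pipe.

    The payload carries both the records and the context that produced them,
    so a drop target can sort, filter, or report on the data.
    """
    return json.dumps({
        "context": {"source": source, "filters": filters},
        "records": items,
    })

payload = export_outline_view(
    items=[{"name": "FuzzyController", "kind": "class"},
           {"name": "updateSTM", "kind": "method"}],
    source="Eclipse Outline view",
    filters=["hide static members"],
)

# The drop target works with structured records, not a screenshot.
records = json.loads(payload)["records"]
print(sorted(r["name"] for r in records))
```

A screenshot of the same view would carry none of this reusable structure.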
CSS based grids are a powerful approach to creating HTML web interfaces. There are now hundreds of CSS Frameworks that support Grid layouts.
I’ve used the 1KB grid system before, but have yet to really understand how to use grid systems well. Recently I was taking another look at this topic. One thing is very obvious: the documentation and examples for many of these are not very good. What if there were an example that all frameworks would implement to showcase their use? In programming we usually write a “Hello World!” program and extend it to say the same thing using the technology in question, like JMS, for example.
So a solution would have to exercise the core feature set of a generic grid and be easy to implement. One should not have to be a high-order expert in CSS to understand the solution. Below I created a simple ‘Hello World!’ that shows nested columns in use. Of course, it would need expansion to show more features and difficulties in using a CSS grid. It would also need a real designer to create something that also looks good.
Example of nested columns using BluCSS. I just took the demo page as an example and changed it to do what I wanted. The BluCSS stylesheet includes media queries to help toward a “responsive” design. Note that I changed the “container” to 100% width. Why have large monitors if pages won’t use the space?
Screen capture of my example rendered in Firefox’s Responsive Design View. In this tool a 1200 x 800 viewport correctly allows rotation; however, on a real smartphone, a Samsung Note, the vertical view is stacked. Hmm.
Created an example ‘Hello World’ CSS grid use. The example also includes the use of AJAX with jQuery to show the HTML source. Not part of the topic, but it was an interesting thing to get working. I wonder if that technique can be used to show other things, like log files.
Question: Since CSS Grids rely on horizontal flow, how are they a grid? Maybe they should be called CSS sliding rows that sometimes align into columns. 🙂
This is based on an old web page. I thought I would store it on this blog just for backup. It was an attempt to construct a predictive interface. Since a user’s actions could not be completely predicted, it seemed to demand an approximation system. BTW, there is a newer field called Soft Computing that this type of investigation is related to.
In February 1992, while watching my daughter learn how to use our home computer, I came up with a new product idea. Hopefully I will post more about that would-be product on this site. The following presents the approach I used for its controller.
I could not get over some technical hurdles with the product, so I put it aside until I could get back to it. One thing that I was proud of was learning a little bit of Fuzzy Logic and using it as the controller. I even wrote a graphical simulator in the C++ language; threading was fun stuff. Watching the fuzzy sets behave like analog surfaces or neural EEG waves gave me the idea for the biomimicry aspects.
Lab Notebook entry September 13, 1995
Yesterday while at a UNISYS class at Burlington, Mass., all of a sudden a thought came to me full blown, I should save each resultant Fuzzy set for each object and add to it instead of resetting it at each sample of the user focusing.
Lab Notebook entry September 23, 1995
It works! I added fuzzy memory to the prototype and it works beautifully! No jitter except for the GetZone() function which doesn’t use fuzzy memory.
I think this development will allow me to do away with the focal space creation entirely. I also started writing a research paper called Controller Using Fuzzy Memory. I should call it short-term memory since the fuzzy rules serve as the long-term memories. Thus, there are three “organic” memories, Long-Term, Short-Term, and Reflective, which correspond to Instinct, Learning, and Reflex.
A year later, in 1996, I dusted off my notes and looked at the Fuzzy Logic Controller (FLC) again, thinking maybe someone else could make use of it, or it could lead to some other ideas. So, I quickly wrote it up and sent it to a journal for review. The paper alludes to biomimicry; I tried to duplicate how animal systems have both a voluntary and a reflex arc (but I never showed how the reflex would override the “voluntary” control output. Perhaps by using weights?).
It was a ginormous rejection! And, reading it now (2006), I would have rejected it too! A little knowledge is dreadful. A snippet of what the reviewer wrote: “It’s hardly clear what’s …. The paper continues without much of an organization & jumps from posterior possibilities (hardly the case) to ANN.”
The reviewer suggested the book An Introduction to Fuzzy Control by Driankov, Hellendoorn, and Reinfrank. I purchased it recently. Very good book. What’s funny is that after reading a little bit of this book, my paper’s main innovation is still valid, as far as I can see.
I make no apologies; I was a computer programmer at the time and not trained as a researcher. I learned enough to get the job done. I read a bunch of stuff and just put it all together.
So, once again I dusted off the notes and exported the paper to HTML. I have not attempted to check if the idea is original or still bad. The contents are unmodified. The diagrams were redone since I lost the digital versions. Don’t ask me what the diagrams mean; I only vaguely remember.
If I had time I would have liked to see if it could compete with a Kalman filter controller. But, that would have meant learning a lot more, like what “Gaussian white noise with covariance matrix” means. Yeah right.
What is this about?
This section is under construction and was meant as a non-technical introduction to Fuzzy Logic Control.
A Fuzzy Controller (at least the ones I was aware of then), and even most simple controllers, sample control variables and, based on the error calculations, produce control outputs. The sampling is performed on a set time period (or triggered by input events). So the controller samples some values, does some calculations, then outputs a control value.
In a car cruise control system, for example, every few milliseconds the speed is checked and a calculation determines how far the speed is from the desired cruise control speed. The output raises or lowers the throttle to change the engine speed. The amount of the increase or decrease is based on the size of the error.
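That cycle can be sketched as a simple proportional controller; the gain constant and the numbers are purely illustrative:

```python
GAIN = 0.5  # throttle change per mph of error; illustrative only

def control_cycle(target_speed, measured_speed):
    """One sample-calculate-output cycle: the correction scales with the error.

    Note the function is stateless: each cycle forgets the previous
    cycle's sampling, which is exactly the point made below.
    """
    error = target_speed - measured_speed
    return GAIN * error

# Far below the set speed: a large throttle increase.
print(control_cycle(65, 55))
# Slightly above it: a small throttle decrease.
print(control_cycle(65, 66))
```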
Of course, actual controllers are more complex than this. The important point in terms of this paper is that each “cycle” forgets the previous cycle’s sampling (in the simplest cases).

Josef Betancourt, last modified 8/15/2007 – 10:03:51 PM
Inefficiencies in the conventional fuzzy logic controller are discussed, and a model to overcome them is presented. Conventional Fuzzy Controllers produce a Fuzzy Set output for each output control variable and defuzzify these to produce a crisp control variable. Instead of using the fuzzy set output for defuzzification, the presented model uses Short-Term Memory. This memory is an aggregation of the sets produced at each control cycle. This allows inter-cycle evidence formation, which improves the control processing. Defuzzification of this short-term memory more fully uses the information generated by the inference engine.
Since it seemed to meet all of the requirements, a Fuzzy Logic Controller (FLC) was chosen for use in a new product development. However, the FLC did not perform well. Many attempts to improve the controller did not succeed: alpha-cut thresholds were added, the term sets tinkered with, knowledge pools created, and different fuzzy logic operators tested. No doubt there is a combination of FLC parameters and pre- and post-process filtering that would work; however, such a search seemed daunting.
Finally, by an intuitive leap, a simple modification to the FLC model was found that could make it work. And this solution seemed to be unique and usable in other applications.
Fuzzy Logic Controllers
During a control cycle, a FLC, see figure 1, fuzzifies the output of the controlled process and applies this to a fuzzy logic engine which uses a knowledge base, term sets, and Zadehan logic. An output fuzzy set is then generated and finally defuzzified to produce crisp outputs to supply to the controlled process. During the next control cycle the reinitialized output fuzzy spaces, where the interim fuzzy sets are aggregated, are again reused. The operation of such a controller is well documented in the literature.
There is a basic inefficiency in this FLC model: the information generated by the FLC operation is not fully used. First, defuzzification, though necessary, is a contraction of dimensionality, thus information loss. Second, by reinitialization of the output fuzzy space in preparation for the next control cycle, the generated information is not used to affect subsequent cycles. Even in Adaptive FLCs, the exogenous ad hoc processes, such as an adaptation engine or a neural network, that monitor the FLC operating characteristics and modify the parameters or knowledge base, only effect a change after several processor cycles. This information loss is also partially true in feedback FLCs, since after information is fed back, they also operate in a conventional feedforward manner. Third, the evidence formation is intra-cycle; that is, the contrary effects of rule firings occur only within a control cycle. Contrary evidence from cycle to cycle is not used. For example, if a particular cycle generated the fuzzy output set HIGH THROTTLE with a maximum height of .9, and in the next cycle HIGH THROTTLE has a maximum height of .2, then that is inter-cycle contrary evidence in that output space.
This information loss and adaptation delay is easily reduced by the use of Short-Term Memory (STM). As shown in figure 2, a memory space is added to the FLC to store fuzzy sets. In a multiple output system each variable would have such a memory space.
Each cycle of the fuzzy logic engine produces an output fuzzy set called the Reflex-Term Memory (RTM), and this is used to update STM. It is the contents of STM that are defuzzified to produce the controller manipulated variable Om. Optionally, to improve throughput and conserve resources, instead of creating a new RTM in each cycle, the fuzzy memory can be updated directly.
Now, since some information is stored to influence the following cycles, every cycle improves the controller. Furthermore, since Defuzzification is performed on the updated memory and not on the greatly changing reflex fuzzy set, a smoother control surface that rapidly converges to the optimum output control value without overshoot is expected. This smoothing “naturally” resists transient or noisy input.
The method of updating fuzzy memory must be a suitable aggregation operation that provides the optimum response for the particular control application. Ideally, to allow adaptation, this operation should be parameterized, such as a Generalized Mean.
In the new product that inspired this FLC model, the aggregation operation used is a simple arithmetic average:

s_i(t) = ( s_i(t-1) + r_i(t) ) / 2

where r_i is the i-th membership value at the t cycle of the reflex-term fuzzy set and s_i, of the short-term fuzzy set. Another operation also tested is an exponential smoothing of the membership values:

s_i(t) = a * r_i(t) + (1 - a) * s_i(t-1)

where a is the smoothing parameter.
Note that when the stored membership value is zero, the current value is fully used. The rationale here is that if there is no contrary evidence, what remains should be fully used.
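Here is a sketch, under my reading of the two aggregation rules above, of the short-term-memory update; the membership values, the smoothing parameter, and the three-element output space are illustrative:

```python
def update_stm_average(stm, reflex):
    """Aggregate the reflex-term set into short-term memory by arithmetic average."""
    return [(s + r) / 2 for s, r in zip(stm, reflex)]

def update_stm_exp(stm, reflex, a=0.3):
    """Exponential smoothing, with the zero rule: if no evidence is stored
    yet (stored value is zero), the current reflex value is used fully."""
    return [r if s == 0 else a * r + (1 - a) * s for s, r in zip(stm, reflex)]

# A HIGH THROTTLE output space over a few control cycles.
stm = [0.0, 0.0, 0.0]
stm = update_stm_exp(stm, [0.2, 0.9, 0.4])  # first cycle: taken as-is
stm = update_stm_exp(stm, [0.1, 0.2, 0.3])  # contrary evidence pulls the surface down
print([round(v, 3) for v in stm])
```

The second cycle's low firings reduce the stored surface rather than being forgotten, which is the inter-cycle evidence formation the model is after.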
These aggregation operators properly accumulate evidence, but most importantly, contrary evidence (rules firing with low values) reduces the fuzzy set surfaces. Interpreting the fuzzy sets as Possibility Distributions, this feedback creates, analogously to Bayesian posterior probabilities, posterior possibilities.
Other aggregation operations are possible. For example, since the output of a FLC is usually defuzzified and normalization is not required, a simple unbounded sum, which acts as strong positive feedback, may be adequate in certain situations. This type of aggregation accumulates all information and is useful in applications that undergo mode changes, choose among options, or are human-interacting.
One important concern with any aggregation method is overflow or saturation of the fuzzy memory space. This may be addressed in many ways. For example, a threshold or normalization can be applied to the fuzzy set memory. Another option is to reset the fuzzy memory when the controlled process undergoes distinct mode changes or operations. Lacking such reset criteria, the FLC system can accomplish this through autoadaptive methods.
This FLC model adds to the many FLC adaptation options available, such as opportunities for the use of new metrics. For example, the reflex fuzzy set output can be defuzzified to produce an internal control value Oc. Comparing this to the STM control output, Om, provides metrics that can be used to qualify or change the FLC system: the aggregation operation can be changed to either increase or decrease the smoothing effect, or this can be used in run-length memory reset control.
Similarly, comparing the reflex fuzzy set surface with the STM fuzzy set provides other metrics that can adapt the generation of the reflex fuzzy sets by modifying the input sensitivity, term sets, rule weights, etc. Figure two illustrates a possible schematic for such an adaptive FLC model.
Fuzzy memory can also be viewed as a form of Artificial Neural Network field or layer. Thus, the techniques used in ANNs can be applied. Furthermore, since neural networks sum throughputs while fuzzy systems sum outputs, the aggregation of Reflex output in Short-Term Memory brings a FLC closer to the operation of a neural network.
The presented FLC model has the ability to accumulate evidence in each cycle for discrete changes of state in a specific new product under development. This is accomplished by the use of a tripartite memory scheme: Reflex, Short, and Long-Term. The Reflex corresponds to the conventional FLC output fuzzy sets, the Short-Term to the aggregated fuzzy memory space disclosed above, and the Long-Term to the Fuzzy Logic Rule base used in the controller.
The term ‘Reflex-Term Memory’ is an indicator for future use in complex Soft Computing controllers having separate ‘reflex’ control paths that bypass the high-level knowledge base and inference engine ‘intelligence’. There can even be competing parallel control paths, each using different technologies.
Perhaps the presented model may offer the same advantages in other application areas. For example, since the updating of the STM is a trans-cycle evidence accumulation, it may be useful in a multistage decision system. Each goal or solution variable is allocated a fuzzy memory. The decision system updates these in each step or cycle to ultimately produce a crisp score for each.
Another example is an automotive cruise control with automatic inter-car safety gap system where evidence for a mode change is accumulated until a certain compatibility level is reached, then the fuzzy memory is reset and the process repeats. Cruising along at 195 mph in the future super highway, the controller should use all the evidence, all the time!
Further research is needed to determine the actual differences, performance, applicability, and advantages of this model compared to both conventional and fuzzy controllers.
1. Cox, E., Fuzzy Logic For Business and Industry, Charles River Media, Inc., Rockland, MA., 1995, pg 107.
All rights reserved. No part of this document may be reproduced or transmitted in any form by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from Josef Betancourt.
Author Josef Betancourt
Originally written: 1996-03-30T03:13:00Z words: 1406 characters: 8019
In the previous post, “How to Measure User Interface Efficiency”, I stated that it is easy to create a User Experience Design (UXD) or Interaction Design (IxD) interface that can minimize the cognitive and manipulative load of executing a specific task. This interface must be usable in the three most used interaction modes: graphical, voice, and text.
Let’s review the problem. A user desires some action X. To trigger X, there must be one or many sub-steps that supply the information or trigger sub-processes so that X can be successful. X can be anything: an ATM transaction, insurance forms on a website, or sharing a web page. Let’s use the last example for a concrete discussion.
On my Android phone (Samsung Galaxy Note) when I am viewing a web page, I can share it by:
Click the menu button
View the resulting menu
Find “Share page”
Click “Share page”
Get a menu “Share via”
Can’t find it
Scroll menu down
Get Message app, ‘Enter recipient’
Click Contact button
Get ‘Select a contact’ app
Click ‘Favorites’ button
Search for who you want to send to
Put check box on contact’s row
Click ‘Done’ button.
Get back to Message app
Click ‘Send’ button
And that is just a high-level view. Note that, of course, systems can use recently-used lists or search to reduce the complexity. If you include the decision making going on, the list is much longer. Other phones will have similar task steps, hopefully much shorter; that is not the point. The interaction diagram is shown in figure 1. TODO: show interaction diagram.
Each of these interactions is quick and easy. But the fact that the task has so many steps is symptomatic of today’s user interfaces and has many drawbacks.
Cognitive load: Despite all warnings and prohibitions, mobile devices will be used in places they should not be, like cars. These task manipulations just make things much worse.
Effort: All of these tasks eventually add up to a lot of effort. That’s OK if this is a social effort, but when it is part of a job, it is not profitable.
Accuracy: The more choices the more possibility of error. As modern user interfaces are used in more situations this can be a problem. Does one want to launch a nuke or order lunch?
Time: These tasks add up to a lot of time.
Performance: As we do more multitasking (the good kind), these interactions slow down our performance. The computer’s own processing time is negligible by comparison.
Interacting with computer interfaces is just too complex and manipulative. How can this be made simpler?
In the industry there has been a lot of progress in this area. However, the predominant technique used is the Most Recently Used (MRU) strategy. This is found in task bars, drop-down menus, and so forth. Most recently, on one Android phone, the Share menu item has an image of the last application used to share a resource. The user can click “share…” and use the subsequent cascading menu, or click on the image to reuse that app to share again.
This is an improvement; however, as discussed below, there are further optimizations possible beyond simply re-invoking the selected sharing application.
Use prior actions to determine current possible actions. What could be simpler? In the current scenario, as soon as I select the ‘Share’ option, the system will generate a proposal that is based on historical past actions. Note this is not just a “Most Recently Used” strategy; it is also based on context. If I am viewing a web page on cooking and click share, most likely I am targeting the subset of my contacts that I have previously shared “cooking”-related pages with.
Now I can just switch to that proposal and with one click accomplish my task. If the proposal is not quite what I had in mind, I can click on the aspect or detail that is incorrect, or I can continue with my ongoing task selections, and each successive action will enhance the proposal.
The result is that in the best case, the task will be completed in two steps versus twenty: a 90% improvement. In the worst case, the user can continue with the task as usual or modify the proposal. But the next time the same task is begun, the generated proposal will be more accurate.
What does a proposal look like? Dependent on the interaction mode (voice, graphical, gestural, text), the proposal will be presented to the user in the appropriate manner. Each device or computer will have a different way of doing this which is dependent on the interface/OS.
Let’s look at a textual output. When I make the first selection, ‘Share’, another panel in the user interface will open; this will present the proposal based on past actions. If there was no past action with a close enough match, the proposal is presented in stages. This could be its simplest form:
Of course, it would look much better and follow the GUI L&F of the host device (Android, iOS, Windows, …). In a responsive design the proposal component would be vertical in a portrait orientation.
The fields on the Proposal will be links to the associated field’s data type: email address, URL, phone, and so forth. This gives the user a shortcut to invoke the registered application for that data type. In the above example, if I am not sending to Mary, I just click on her name and enter the contacts application and/or get a list of the most likely person(s) I am sending the web page to (based on web page content, URL, etc.). Also, if I am not sending an SMS message, when I click something else, like email, the proposal changes accordingly. When I send email, I am generally sending to a co-worker, for example.
To present an analogy of a similar approach, in Microsoft’s Outlook application one can create rules that control the handling of incoming email. A rule has many predefined actions in the rule domain specific language (VB code in this case). See figure 3. Of course, the Outlook rule interface is not proactively driven. You could select the same options a million times and the interface will never change to predict that.
A proposal is an automatically, dynamically generated rule whose slots are filled in based on probabilities of past actions. That rule is translated into an appropriate Proposal in the current UI mode. When that rule is triggered and the user agrees with the proposal, the associated apps that perform the desired task are activated.
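As a toy sketch of such proposal generation (the history records, contexts, and slot names are all made up), past actions can be counted per context to fill the proposal's slots:

```python
from collections import Counter

def generate_proposal(history, context, action):
    """Fill a proposal's slots with the most probable values from past actions.

    history: list of dicts like {"context": ..., "action": ..., "via": ..., "to": ...}
    Returns None when no past action matches, i.e., the user proceeds as usual.
    """
    matches = [h for h in history if h["context"] == context and h["action"] == action]
    if not matches:
        return None
    proposal = {"action": action}
    for slot in ("via", "to"):
        proposal[slot] = Counter(h[slot] for h in matches).most_common(1)[0][0]
    return proposal

history = [
    {"context": "cooking", "action": "share", "via": "SMS",   "to": "Mary"},
    {"context": "cooking", "action": "share", "via": "SMS",   "to": "Mary"},
    {"context": "work",    "action": "share", "via": "email", "to": "co-worker"},
]

# Viewing a cooking page and clicking Share yields a one-click proposal.
print(generate_proposal(history, "cooking", "share"))
```

A real system would weight recency and partial matches rather than exact context strings, but the one-click outcome is the same.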
Predictive interfaces are not a new idea. A lot of research has gone into their various types and technologies. Amazingly, in popular computing systems, these are nowhere to be found.
Interestingly, games are at the forefront of this capability. To provide the best gameplay, creators have had to use applied Artificial Intelligence techniques and actually make them work, not leave them as fodder for academic discussions.
Even Microsoft has had a predictive computing initiative, “Decision Theory & Adaptive Systems Group”, and had efforts like the Lumiere project. Has anything made it into Windows? Maybe the ordering of a menu changed based on frequency.
I came up with this idea while using my Samsung Galaxy Note smartphone or “phablet”. Using the same phone I brainstormed the idea. Here is one of the diagrams created using the stylus:
“A Comparison between Decision Trees and Markov Models to Support Proactive Interfaces“; Joan De Boeck, Kristof Verpoorten, Kris Luyten, Karin Coninx; Hasselt University, Expertise centre for Digital Media, and transnationale Universiteit Limburg, Wetenschapspark 2, B-3590 Diepenbeek, Belgium; https://lirias.kuleuven.be/bitstream/123456789/339818/1/2007+A+Comparison+between+Decision+Trees+and+
“On-line Case-Based Planning“, http://www.cc.gatech.edu/faculty/ashwin/papers/er-09-08.pdf, Santi Onta˜n´on and Kinshuk Mishra and Neha Sugandh and Ashwin Ram, CCL, Cognitive Computing Lab, Georgia Institute of Technology, Atlanta, GA 30322/0280, USA
My frustration level reached a peak while using a mobile phone. So, again, I’m thinking about GUI design. Why are the interfaces so bad and how to fix them?
First step is just figuring out how to measure the badness. There are plenty of UI measures out there and many papers on the subject. BTW, I’m just a developer grunt, coding eight hours a day, so this is out of my league. Yet, the thoughts are in my head so ….
To get to a goal takes work. In physics, W = Fd: work equals force times distance. There is no direct correlation to a user interface. But what if W is equal to the user interface element activated times the number of possible objects to act upon, i.e., W = U x O? Work equals UI force times the number of options. This ‘force’ is not a physical force or pressure, of course. It is a constant mathematical value.
Example: you click on a button and then you are confronted with a choice of five options. Let’s say you are reading a web page and you want to share it with someone. This takes too much work, way too much. Even getting to the sharing choice is monstrous: click the menu button, click share, find which method of sharing, get to the contacts app, blah blah.
So, here is what we have. Activating a user interface element is a force; each type of element is given a constant value, a button is 10, a scroll bar is 100, and so forth. The number of options that results and is relevant toward the end goal is the ‘distance’.
Now you divide this resulting value by how much time it took you to get there and you have Power. P = (U x O)/T. (Update 7/26/2013: Probably a better dimension is actual distance of pointer movement or manipulations).
Add these up for each step in completing the goal and you have a metric for an interface user story.
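The metric above can be sketched as follows; the force constants, option counts, and times are made up for illustration:

```python
# Illustrative UI 'force' constants, in the spirit of button=10, scroll bar=100.
FORCE = {"button": 10, "menu": 10, "scrollbar": 100}

def step_power(element, options, seconds):
    """P = (U x O) / T for a single interaction step."""
    return (FORCE[element] * options) / seconds

def story_metric(steps):
    """Sum the per-step power values into a metric for the whole user story."""
    return sum(step_power(*s) for s in steps)

# Hypothetical share-a-page story: (element, options presented, time taken).
share_page = [
    ("button", 8, 1.0),      # open the menu: 8 items appear
    ("menu", 12, 2.0),       # find and click 'Share page': 12 share targets
    ("scrollbar", 12, 3.0),  # scroll the 'Share via' list
    ("button", 40, 5.0),     # pick a contact out of 40 favorites
]
print(story_metric(share_page))
```

Comparing this number across interface designs for the same goal is the point: a design with fewer, cheaper, less-cluttered steps scores lower.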
Why use the number of options for distance? The number of options presented to the user is stress. It is kind of related to Hick’s Law: “The time to make a decision is a function of the possible choices he or she has.” If computers and software were not stuck in the 1960s (face it, modern stuff is just fancy screens), they would know what the hell I want to do.
A follow up post will give the solution to this User Experience Design (UXD) or Interaction Design (IxD) problem, and the solution is actually pretty easy.