Toward predictive interfaces: 'Google Now on tap'

In a prior post I compared “Google Now” and the concept of a Proactive User Interface. It looks like ‘Google Now on tap’ will finally be a step in the right direction.

My first impression, from a quick read of some articles, is that it is an expansion of the info-cards concept with more correlation with the current UI context. This is such a powerful and extremely obvious feature that you wonder why it was not done years ago. True, Google will put more search and Big Data power behind this. But is it really predictive, and will it “learn” a user’s information patterns?

An example information pattern (from my prior post): a user is viewing a web site. If a certain amount of time is spent, or a certain page or article type is visited, there is a probability that clicking a share button will be followed by predictable actions, for example, sharing the link with a colleague or loved one. The UI can then present a proactive plan. See “Proactive User Interface“. Merely generating information related to context still requires the user to expend wasted effort to form and act on immediate action plans. So what are those octa-core chips for?
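To make the idea concrete, here is a minimal sketch of such an information pattern in code. Everything here is hypothetical (the `PATTERNS` table, `suggest_action`, the thresholds); it is not any real Google API, just an illustration of mapping UI context to a probable next action.

```python
# Hypothetical "information patterns" learned from past behavior:
# (page_type, minimum dwell time in seconds) -> (likely next action, probability)
PATTERNS = {
    ("article", 60): ("share_with_colleague", 0.7),
    ("product", 30): ("add_to_wishlist", 0.5),
}

def suggest_action(page_type, dwell_seconds, threshold=0.5):
    """Return a proactive suggestion if a known pattern clears the threshold."""
    for (ptype, min_dwell), (action, prob) in PATTERNS.items():
        if page_type == ptype and dwell_seconds >= min_dwell and prob >= threshold:
            return action
    return None  # no confident pattern: stay quiet rather than annoy the user

print(suggest_action("article", 90))   # long read of an article -> suggest sharing
print(suggest_action("product", 10))   # too short a visit -> no suggestion
```

A real system would learn this table from usage data rather than hard-code it, but the shape of the idea is the same: context in, proactive plan out.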

[Diagram: Proactive Interface v2]


  • Update, Nov 9, 2015: Google just open-sourced a machine learning system, TensorFlow.


Is Google creating an alternative to RSS?

Google dropped Reader, and someone reported that the “Google Alerts” RSS feed is not working. Maybe Google is working on replacing the RSS and Atom web-feed technologies so that they work more in line with Google+ and, like Facebook, create a “walled garden.”

We know Google does some great work correcting or improving existing approaches, for example, ‘Protocol Buffers‘. Syndicated feeds are ubiquitous, if lately on the downswing; perhaps it is time for fresh thinking on the approach. Certainly, modern web technology has much more powerful options, like XMLHttpRequest, Server-Sent Events, WebSocket, and WebRTC.
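Server-Sent Events is a good example of what a push-based replacement for feed polling looks like on the wire. As a sketch, here is a small Python helper (the function name and sample payload are my own invention) that formats an event per the `text/event-stream` format defined in the WHATWG HTML specification:

```python
def sse_event(data, event=None, event_id=None):
    """Format one Server-Sent Events message (text/event-stream)."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")      # lets clients resume after a drop
    if event is not None:
        lines.append(f"event: {event}")      # named event type
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")       # payload, one line per "data:" field
    return "\n".join(lines) + "\n\n"         # a blank line terminates the event

print(sse_event("New post: Proactive Interface v2",
                event="feed-update", event_id="42"))
```

Compare that with RSS/Atom, where every reader re-fetches the whole XML document on a timer; here the server pushes only the delta, which is much closer to the walled-garden stream model.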


Microsoft Offers Reward for any successful Bing search!

It must mean something that, to find anything on Microsoft’s own sites, you have to use Google. Even within Microsoft Office apps, searching for something is like embarking on a mythical quest for some holy foo.

Come on, Microsoft, you supposedly hired the best, but we’re getting the worst. If, within Office Word, I open the Help window and search for “remove horizontal line”, relevant hits should show up, and they should be related only to Word. In fact, if I put quotes around the phrase, nothing is found. No, I won’t check my spelling; didn’t those PhDs put in some algorithm to account for spelling?

Let me try the same search on Google: boom, 0.27 seconds later, 35,700 results.

What I’d like to see in the Wall Street Journal one day, ratings for:

            Suckiness   Evilness   Richness
Microsoft       ?           ?          ?
Google          ?           ?          ?
Apple           ?           ?          ?

By the way, this is all in jest and constructive feedback, just in case I apply for a job at Microsoft. Do you hire for average intelligence but a lot of creativity and good looks? If you don’t, maybe that’s the problem.

Computer vision landmark by Google Labs


I was reading this post on the Google research blog.  Very interesting.  They ” … present a new technology that enables computers to quickly and efficiently identify images of more than 50,000 landmarks from all over the world with 80% accuracy”.  Of course, they use massive computing resources and multiple sources to create humongous databases so that one can do image comparisons, etc.

While reading, I thought of an alternative method; since this is not my field, it is probably a very naive mess too.  Anyway, I would create a system that generates the base images dynamically.  For example, the actual target object, or a model of the target, is put inside a sphere on which one or more digital cameras can move.  These cameras take snapshots from their respective 3D angles and feed the results to a process that indexes them and “cleans” them up.  By controlling the radius of the sphere and the number of snapshots per unit of camera travel, different resolutions of the same target object are made available.  Now that an object has many images, they can be manipulated: effects such as low-light conditions, occlusion, and so forth can be added as filters during an actual identification search, prior to the actual image-matching tasks.
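The “cameras on a sphere” part of the idea can be sketched in a few lines. This is purely illustrative (the function name, step counts, and radius are mine, not anything from the Google work): it generates camera positions on a sphere around the target using spherical coordinates, where the radius and the number of latitude/longitude steps control the resolution of the captured image set.

```python
import math

def sphere_viewpoints(radius, lat_steps, lon_steps):
    """Camera positions on a sphere centered on the target at the origin."""
    points = []
    for i in range(1, lat_steps + 1):              # skip the poles for simplicity
        theta = math.pi * i / (lat_steps + 1)      # polar angle from the +z axis
        for j in range(lon_steps):
            phi = 2 * math.pi * j / lon_steps      # azimuth around the target
            points.append((radius * math.sin(theta) * math.cos(phi),
                           radius * math.sin(theta) * math.sin(phi),
                           radius * math.cos(theta)))
    return points

views = sphere_viewpoints(radius=2.0, lat_steps=3, lon_steps=8)
print(len(views))  # 24 snapshot positions at this resolution
```

Doubling `lat_steps` and `lon_steps` quadruples the number of snapshots, which is the “different resolutions of the same target” knob described above; each position would then be handed to the rendering or capture step.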

I’m sure my idea is not new.  It is probably related to stereolithography, MRI, ray tracing, and other powerful computer-graphics subjects.