Social web means slow web?

I just visited a news site.  My laptop is reasonably fast for web browsing and I have a fast enough network connection.  Yet the site did not pop up; only about a fourth of it loaded.  The rest of the time was spent (as shown in the Firefox status bar) waiting on api.facebook.com, Google Analytics, Twitter streams, RSS feeds, and so forth.

I realize that someone somewhere has to pay for the social web, just like consumers indirectly pay for broadcast TV.  I’m all for it.  But I see some problems up ahead as the proliferation of rich content and social networks clashes with the revenue-generation machinery mixed in alongside them.  Already there are bandwidth problems as people use more streaming media on the net.

What is the solution?  If I knew, I’d be rich.  I’m sure it will involve greater use of Content Delivery Networks, edge servers, grids, and torrent-like technologies.  But did I mention that security and privacy concerns put a kink in any mad computer scientist’s scheme?

Computer vision landmark by Google Labs

I was reading this post on the Groovy research blog.  Very interesting.  They “… present a new technology that enables computers to quickly and efficiently identify images of more than 50,000 landmarks from all over the world with 80% accuracy.”  Of course, they use massive computing resources and multiple data sources to build huge databases so that image comparisons and the like can be done.

While reading I thought of an alternative method; since this is not my field, it is probably a very naive mess too.  Anyway, I would create a system that generates the base images dynamically.  For example, the actual target object (or a model of the target) is placed inside a sphere along which one or more digital cameras can move.  These cameras would take snapshots from their respective 3D angles and feed the results to a process that indexes and “cleans” them up.  By controlling the radius of the sphere and the number of snapshots taken per unit of travel (much like a frame rate), different resolutions of the same target object become available.  Once an object has many images, they can be manipulated further: low-light conditions, occlusion, and so forth can be added as filters during an actual identification search, prior to the image-matching step itself.
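To make the idea a bit more concrete, here is a minimal sketch in Python of how the sphere of viewpoints and the degradation filters might fit together.  Everything in it is my own illustration, not anything from the Google work: the renderer is a placeholder that returns a dummy image (a real system would render or photograph the target), and the names camera_positions, render_view, low_light, and occlude are hypothetical.

```python
# Rough sketch: sample camera positions on a sphere around a target object,
# capture a view from each position, and derive degraded variants
# (low light, partial occlusion) to index alongside the clean view.

import numpy as np

def camera_positions(radius, steps_per_ring, rings):
    """Evenly spaced viewpoints on a sphere of the given radius."""
    positions = []
    for i in range(1, rings + 1):
        phi = np.pi * i / (rings + 1)               # polar angle (poles skipped)
        for j in range(steps_per_ring):
            theta = 2 * np.pi * j / steps_per_ring  # azimuth
            positions.append((
                radius * np.sin(phi) * np.cos(theta),
                radius * np.sin(phi) * np.sin(theta),
                radius * np.cos(phi),
            ))
    return positions

def render_view(position, size=(64, 64)):
    """Placeholder for the real camera/renderer: returns a grayscale image."""
    rng = np.random.default_rng(abs(hash(position)) % (2**32))
    return rng.uniform(0.0, 1.0, size)

def low_light(image, factor=0.3):
    """Simulate low-light conditions by scaling intensities down."""
    return np.clip(image * factor, 0.0, 1.0)

def occlude(image, fraction=0.25):
    """Simulate occlusion by blanking out a corner block of the image."""
    out = image.copy()
    h, w = out.shape
    out[: int(h * fraction), : int(w * fraction)] = 0.0
    return out

if __name__ == "__main__":
    index = []
    for pos in camera_positions(radius=2.0, steps_per_ring=8, rings=4):
        view = render_view(pos)
        # Store the clean view plus degraded variants for matching later.
        index.append({"position": pos,
                      "views": [view, low_light(view), occlude(view)]})
    print(f"Indexed {len(index)} viewpoints, {len(index) * 3} images total")
```

Increasing steps_per_ring and rings (or shrinking the radius) plays the role of the “frame rate” knob above: more viewpoints, and therefore more base images of the same object at different effective resolutions.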

I’m sure my idea is not new.  It is probably related to stereolithography, MRI imaging, ray tracing, and other powerful computer graphics topics.