Tuesday, June 30, 2009

Prototyping books

Books are encapsulated (and structured) written expression, just as a formal speech is encapsulated spoken expression, a musical performance is encapsulated musical expression, and so on. All forms of encapsulated expression involve the generation of ideas, experimentation with various forms of expression, arrangement of pieces of expression into some linear or spatial order, reorganization of elements as the encapsulation takes shape, and so on. We recognize all these as steps in a process of prototyping. Prototyping is needed because all encapsulated expression (except for extempore performance or improvisation) is intended to assume a definite final form which is then frozen for future performances. Unlike theater, recorded encapsulated expression such as books and music doesn't involve the original artists themselves.

Tools for book prototyping have been around for hundreds of years, but only in the past few decades have they begun to transform radically. The earliest tools were palm leaves, parchment, tree bark, or some similar surface, and some kind of stylus or quill along with ink. Typically, the 'artists' themselves wrote out the work, and this was then made available to readers. Later, the role of 'copying writers' emerged, whose task was merely to make exact copies of the original through writing. Gutenberg's press changed that process: the 'copying writer' was replaced by a 'typesetter' who laid out the type, after which any number of copies could be made. The artist/writer created the original manuscript on paper, the typesetter employed the manuscript to set type, and the printer made any number of copies required.

The advent of the desktop computer and the laser printer made it possible for anybody to be the writer, the typesetter, and the printer all rolled into one.

Today book publishing has become a huge business, but it has also created some dilemmas. Book publishing works on the principle of economies of scale. Publishers want to ensure that there will be sufficient demand for a certain work before proceeding to publish it. Consequently, publishing is a guessing game, and many potentially popular works are rejected, while several duds are published and ignored by the market. Publishing becomes a game of percentages. Books are published in batches and fresh batches are printed only if publishers forecast a sufficient demand.
Laser printers are fine for printing a few copies to share with friends and family, while book publishers require estimates of many thousands - hopefully tens of thousands - of copies before agreeing to publish a book. Some once-popular books go out of print and then become difficult to access.

A solution is needed to fill the middle ground, and print on demand may just be the disruptive innovation to do that. Print on demand makes it nearly as economical to print ten copies of a book as one hundred, since the overheads are low. The technology also ensures that (assuming copyright issues are sorted out) no book need ever go out of print. Additional benefits emerge: it will no longer be necessary to warehouse any but the most popular and fast-moving titles, and books with low demand will always be available if the customer is willing to pay a little more and wait a little longer. And people looking for rare works usually are.
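
A rough back-of-the-envelope sketch of why this works, using purely hypothetical cost figures (the setup and per-copy numbers below are illustrative assumptions, not actual printer or publisher pricing): traditional offset printing has a large fixed setup cost that must be spread over the run, while print on demand has almost none.

```python
# Illustrative sketch only: hypothetical setup and per-copy costs, not real pricing.

def per_copy_cost(setup_cost, marginal_cost, copies):
    """Average cost per copy when a fixed setup cost is spread over the print run."""
    return (setup_cost + marginal_cost * copies) / copies

for copies in (10, 100, 1000, 10000):
    offset = per_copy_cost(setup_cost=2000.0, marginal_cost=1.50, copies=copies)  # traditional offset run
    pod = per_copy_cost(setup_cost=50.0, marginal_cost=4.00, copies=copies)       # print on demand
    print(f"{copies:>6} copies: offset ${offset:8.2f}/copy, print-on-demand ${pod:6.2f}/copy")
```

With numbers like these, offset printing only wins at volumes in the thousands; below that, the per-copy cost of print on demand is essentially flat, which is why ten copies cost roughly the same per copy as one hundred.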

Writers are now free to actually prototype books: they may write one, print a small number of copies, and, based on demand and feedback, update the book and print many more. Of course, this is already possible with electronic publishing, especially on the web. And Project Gutenberg is making many classics that have entered the public domain available on the internet. Further, book readers such as Amazon's Kindle are trying to make the physical book obsolete. But it will be a while before physical books fall out of favor; while the demand for physical books may shrink, they will continue to exist because they are more durable than electronic devices, don't need batteries, and one need never worry about changes in data storage formats and incompatibilities.

The Espresso Book Machine, discussed in this article in the Boston Globe, is pioneering print on demand at the Northshire Bookstore.
When the machine is connected to an expanded online catalog of titles later this year, Morrow said, the bookstore will be able to offer customers an “ATM for books” that will provide access to millions of works.

“The idea is that soon we’ll be able to print out any book that’s ever been printed,” he said. “That could really change people’s image of the small bookstore.”
...
In its first year, Northshire’s book machine printed dozens of original books by customers, including memoirs, autobiographies, poetry collections, and cookbooks, usually producing from 30 to 50 copies of each. The bookstore also published a young adult novel written by a local 12-year-old and a previously out-of-print guide to Manchester.

Self-publishers pay a $49 setup fee and a per-page rate that ranges from 5 to 9 cents, depending on the length. Northshire provides an a la carte menu of editorial and design services from a network of providers. Copy editing costs 1 cent per word; book design services, $40 an hour.
...
Rodefeld, a former graphic designer who works at a tiny desk next to the Espresso machine, produces up to 35 books a day. “It’s exciting to see an author’s face when I hand them the first book off the press,” she said. “To see the dream, the fantasy, become a reality - that really tickles me. I get to be Santa Claus all the time here.”
...
The numbers at Northshire Bookstore, Morrow said, are “on the cusp” of working out. The big payoff will come, he said, when the Espresso machine is seamlessly connected to the entire universe of books, allowing the store to fulfill any request in minutes.

MIT Eureka Fest 2009

High school student projects. [Thanks to CrunchGear.]

"You may use your class notes and Feynman"

Great story! Probably apocryphal, but the kind of story I want to believe because I love the man so much.

---------------------------

Posted July 30, 2001 06:15 | Category: Story | #

Since Caltech has an honor system most exams tend to be take home and open book. The instructor for the class will write any special directions at the top of the exam. For a freshman physics exam one year the instructions read:

You have 3 hours.
You may use your class notes and Feynman.

"Feynman" of course referred the Feynman physics lecture notes which are published in three volumes.

On reading these instructions one particularly alert student grabbed his exam and raced across campus to Richard Feynman's office. He handed the exam to Feynman explaining that the directions clearly indicated he was a valid resource. Feynman glanced over the instructions and agreed. He wrote out the exam in less than half an hour, and got a perfect score!

Posted to alt.folklore.college by daly@strawber.princeton.edu (John Daly) On 3/8/92
-----------

Monday, June 29, 2009

Feynman: The inconceivable nature of Nature

I gotta stop this, else I'll end up posting every dang video of Feynman here. What an incredible teacher! He breathed such life and color and intensity into every little idea about the universe. He must have received a standing ovation at the end of each of his lectures.

Feynman's gotta have his orange juice

What a riot!

Sunday, June 28, 2009

Richard Feynman on science and aesthetics

Richard Feynman is my favorite scientist. Scratch that -- he's my favorite thinker, period. Delivered in his inimitable style and East Coast accent.


Green Box - innovative pizza delivery box

A nod to Daniel Pink, through whom I found this video. Very, very neat -- especially the leftover box at the end. Just when you thought there was no more innovation possible in this most minimal of designs comes this redesign, which requires no additional material or even a significant change in the manufacturing process. And yet it avoids using additional material to wrap any leftovers, and the leftover box takes up little room in a refrigerator. This is pure genius.

This one's for you, Michael ...

To be honest, I never was one of your fans. And yes, over the last decade or two, whatever fame and adulation you may have garnered was overshadowed -- with considerable support from a rapacious media -- by your eccentricities, of which you had more than your fair share.


But then, I was not among the legions - hundreds of millions, likely, perhaps even a billion or more, it appears now - of kids who had grown up listening to your music and who never gave up trying to emulate your style, especially your dancing. Yes, you were a god up in the sky to them, but you also made it possible for them to dream that they could dance like you some day. And man, you knew how to dance, you knew how to do things with your body that other grown-ups couldn't even imagine doing; but not the kids: their minds and their bodies were pliable, and the beat and the music were so infectious, so contagious, they just had to go out and try doing it themselves. And they were happy just to try, and to feel that they had managed to accomplish at least some part of what you did so well.

You delighted, you entertained, you, at least for a brief moment, uplifted the spirits of millions of people, young and old, not just in your town, not just in the country of your birth, not even just people of your race or color. You cut across all barriers, Michael, and when you sang, We are the world, it was utterly believable, for you had the ability to captivate the hearts of people from every culture and from every social stratum and from every generation.

You were an original, Michael. You took the seemingly ordinary, polished it, perfected it, enhanced it, transformed it, infused it with life, with power, with intensity, with passion, and yet made it all look so easy. You melded music, movement and drama into one seamless, inseparable spectacle. All the kids wanted to be you, be Michael Jackson, the greatest entertainer yet to walk upon the earth. Your influence stretches across the planet, and entertainers in every land owe at least a small debt to you. The film industry in India has constructed an entire genre of song and dance derived directly from your innovations.

Like many original and innovative persons who have graced this planet, your life was short, and tragic. Why is it almost a law of nature that those who give of themselves the most must also suffer the most?

But it's over now, Michael, you will suffer no more. The media will no longer mock, taunt and haunt you. The hacks will return to their sordid lives, but you will remain forever in the hearts of the masses you delighted. And generations from now, they will speak of the man who brought so much joy to life. They will still be doing the moonwalk.

Thank you, Michael, rest in peace. Know that you will be loved forever.

Friday, June 26, 2009

From left field: the unpredictable impacts of innovations. And of deaths.

Every now and then a monumental event occurs, or a seemingly innocuous innovation enters human society, which then dramatically alters its configuration, power structure, processes, communication, and a whole lot else. It's often hard to tell, until after the passage of many years, how dramatic the impact was, but we live in a time when many such events occur.

The impact of the world wide web has not only been well researched, it has been experienced all over the world. The web caused a very rapid, nearly discontinuous change in the way the peoples of the world generate, exchange, and absorb information. Many predictions about the world made before 1990 are practically worthless, or the change they anticipated arrived much, much earlier than expected.

The digitization of music, followed by the infrastructure to stream it over the web without the need for permanent recording media like tapes, vinyl records, or CDs, has rendered the music publishing industry in its current form virtually superfluous. And the same impact is beginning to be felt in the book publishing business -- ironically after, rather than before, the music business, although text and graphics were available on the net long before music was.

And now, the Apple iPhone. According to one report, within a week of the introduction of the iPhone 3GS, which is capable of recording video, there has been an incredible 400% surge in YouTube video uploads. Why? The iPhone makes it trivial not only to record, but also to edit and directly upload video from the phone, eliminating the intermediate steps of transferring video to a computer, editing it there, and uploading it.

Again and again we see that one of the most common ways in which innovations transform existing structures and processes is by eliminating intermediaries or intermediate steps. Telephones, email, personal vehicles, personal computers, TV ...

Forecasting is a chancy game in these times; most forecasters end up looking foolish, eventually. Who knew how popular Michael Jackson was? His untimely death is almost bringing down the internet, with people communicating their grief, sharing stories and songs, and celebrating his life.

Thursday, June 25, 2009

Innovate like Microsoft (Rip off the other guy)

You'd think success, even middle age, or Total Market Dominance -- or something -- would transform Microsoft. No such luck. The company that ripped off DOS (CP/M), Windows (Mac), Powerpoint (Persuasion), Internet Explorer (Mosaic), NT (VMS), and an almost endless list of technologies now rips off a small travel site called Kayak via its 'new/improved' Bing search engine. Take a look at the picture above and decide for yourself. And read the story too.

On the benefits of being 'scatterbrained'

More than two decades ago, when I was in graduate school pursuing a PhD, I nervously carried a draft proposal of a topic that really excited me to a faculty member whom I looked upon as a potential dissertation advisor. A European with a reasonable command of English, she was reputed to be sharp, cold, curt, and fastidious. She was all that and more. She spent no more than about ten minutes with me (she was extremely organized), and during that span she browsed through my apology for a topic analysis, marked it all over in red ink, and left deep, long gashes in it. With each stroke of her pen, my enthusiasm dropped several feet, and by the time I left it must have gone right through the ground and emerged from the other side of the earth. I don't recollect anything she scribbled on the paper other than the following words: "you are scatterbrained".


Man, those words have reverberated in my brain for more than two decades -- they hurt, and badly. Needless to add, I never pursued either the subject or the faculty member any further. She went on to become a highly reputed researcher in the field and is now a member of various important international bodies and a consultant to a number of large corporations. She also divorced her husband at that time. Yeah, snarky, but I had to get that in. And honestly, while she has published enough material to fill a large truck, there is not one thing there that sets one's heart racing. It is dull, boring stuff, bordering on the obvious. Meticulous, methodical, rigorous ... all that stuff. Somebody's got to do it, for the benefit of science, I guess, and she did it. Good for her, and for science. I'm not sure if anybody would ever want to read her workmanlike writing again; certainly not me.

Me, I checked out other faculty in the department, eventually quit, and got my PhD under the most wonderful advisor anybody could hope for, at another university. And I had the time of my life researching what I loved in the manner I wished. I remained (and remain to this day) a scatterbrain, a quality that turned out to be an asset in the field I eventually settled on.

Now, in an article titled A wandering mind heads straight towards insight, the esteemed Wall Street Journal waxes eloquent on the benefits of being what that august professor deemed 'scatterbrained'. Referring to major breakthroughs such as that of Archimedes, the article says,
These sudden insights, they found, are the culmination of an intense and complex series of brain states that require more neural resources than methodical reasoning. People who solve problems through insight generate different patterns of brain waves than those who solve problems analytically. "Your brain is really working quite hard before this moment of insight," says psychologist Mark Wheeler at the University of Pittsburgh. "There is a lot going on behind the scenes."
So, Professor Methodical had her own way, and I had mine, and never the twain would meet.
In fact, our brain may be most actively engaged when our mind is wandering and we've actually lost track of our thoughts, a new brain-scanning study suggests. "Solving a problem with insight is fundamentally different from solving a problem analytically," Dr. Kounios says. "There really are different brain mechanisms involved."
That was me.
By most measures, we spend about a third of our time daydreaming, yet our brain is unusually active during these seemingly idle moments. Left to its own devices, our brain activates several areas associated with complex problem solving, which researchers had previously assumed were dormant during daydreams. Moreover, it appears to be the only time these areas work in unison.

"People assumed that when your mind wandered it was empty," says cognitive neuroscientist Kalina Christoff at the University of British Columbia in Vancouver, who reported the findings last month in the Proceedings of the National Academy of Sciences. As measured by brain activity, however, "mind wandering is a much more active state than we ever imagined, much more active than during reasoning with a complex problem."
So here's my advice to you, gentle reader. Go out, daydream your heart out. You're allowed to daydream at least one-third of the time, anyway, as per the article. Daydreaming is good for you, and for society. Daydreaming could lead to stunning breakthroughs that could improve mankind's lot. But even if it didn't, someone engaged in daydreaming is not committing crimes, driving dangerously, or causing any kind of harm to the world. Now there's a two-for-one deal: society wins even if nothing comes out of your daydreaming. And you've had a great time too!

I'm thinking of launching a non-profit organization called Society for the Promotion of Universal Daydreaming with a large potato for a corporate logo (and mascot), symbolizing the legion of daydreaming couch potatoes that have made the world a better place. ;-)

Pizza Hut Minus Pizza = The Hut. And Pizza. Hunh?

Apparently, people -- if you can call those over 35 that -- don't want to eat pizza anymore. At least, not the kind of junk served at Pizza Hut. This has the corporation's mandarins worried. They sat around a swank table and exclaimed, 'Holy crap, they don't want to eat junk anymore?! I wonder why?!' So they went and asked them (the 35-and-up geezers). And they said (and I quote):

one of the big things that would reignite their passion with the category is to have a pizza made with multigrain crust and an all natural tomato sauce...
All natural tomato sauce! Now, who'da thunk! After all the decades and billions of dollars we have spent convincing people that synthetic crap is good for them, those ingrates want to eat healthy, natural stuff! Oh, the nerve!

After the dust settled, the Chief Pizza Officer and his Condimental Lackeys decided to serve customers stuff that actually grows in the ground. Just to play it safe, they decided to change the corporation's name from 'Pizza Hut' to just 'The Hut'! Ain't that cool and all?! They did this, because ... wait for this ...
... that ties in nicely with (today's) texting generation.
Oh, yeah, it does! You see texting limits you to 140 characters, and we were able to knock off a whole 2 (two) characters from the name! We are so ingenious! Now, whenever people come across the 'vocabulary word' (which is what we call it) 'hut' they will immediately associate it with pizzas! Indeed, as they travel around, especially in the developing world, they will come across many huts, and just looking at them will immediately generate a craving for pizza, and they'll rush back home immediately for their favorite pie!

And then look at our real game changer ... a red-colored box! Now, that's a first in the pizza business! No more dull, brown boxes, but a brilliant lip-smacking red one, to get the gastric juices flowing.

Crap by any other name remains crap. Taste and smell are among our most powerful senses, and they are hard to influence through the mind alone. There is innovation, and then there is stupid stuff like this. Pathetic. Some marketing consultants must be laughing all the way to the ATM.

Wednesday, June 24, 2009

The Idea of Sam Pitroda

Twenty-five years ago, before we had Azim Premji and Narayana Murthy to inspire us, and before Abdul Kalam fired our imaginations and became a household name, there was Sam Pitroda to provide leadership in advanced technologies to an emergent India. Pitroda grew up in a humble Gujarati family in Orissa and, after moving to the US, made it big as a telecom entrepreneur, eventually becoming a US citizen. He was invited by the then newly anointed Prime Minister Rajiv Gandhi to help navigate India into the 21st century on a technology platform. Pitroda was reputed to have turned in his US citizenship in order to serve as an advisor to Rajiv Gandhi.


In the end, the Idea of Sam Pitroda was more successful and enduring than any initiatives he got going. He served as a beacon of inspiration to a whole generation of Indian engineers, many of whom have gone on to set up their own projects. And indirectly, he likely revolutionized telecommunications in India.

Eventually, Rajiv Gandhi -- a technology champion who was India's first Prime Minister to visibly use a desktop computer in his office -- was voted out of office, and Pitroda lost his key champion in the Indian government. A disappointed Pitroda returned to the US, where he continues to be based, but his desire to help transform India through technology has not lost any of its intensity.

Flipping through banal TV programs, I managed to catch snatches of an interview with him on NDTV Profit. In that part of the interview, Pitroda was asked to address the problem of government money and resources meant for villages rarely reaching them because of the manifold layers of middlemen who took their cut, leaving next to nothing for the intended recipients. He responded that technology could eliminate all the layers of middlemen and ensure that the losses and inefficiencies of transfer were minimal.

I agree with him on this: as various governmental operations get computerized, paper files get eliminated, and with them goes the tendency of said files to gather dust, lose documents, or vanish completely -- unless the various public intermediaries are propitiated with monetary benefits. I am pleasantly surprised at some of the dramatic changes of the past 25 years.

But let's address another, more structural problem: why did we end up with so many layers in the first place? It has been remarked that India lives in her villages, and that is true in many ways, including one especially important one: India has long had a decentralized society and culture made up of autonomous, self-governing villages and cultures rooted in local geography and history. Over the millennia, kingdoms and empires have come and gone, but these powers have been only loosely coupled to the fate of the villages, where life proceeded quite independently of the transient powers that would stop by to collect tribute. The idea of centralization, be it of culture (and religion) or of society, is an alien, Western one. Centralized societies demand a homogeneity of belief and practice that is unsuited to India's diversity. Centralization has its advantages -- it is centralization that permits the creation of large organizations and even empires, such as the British Empire, which eventually came to rule over a significant fraction of the world. But when centralized empires collapse, chaos follows as new leaders emerge. Autonomous villages are limited in size and power, but if any one village (or the reigning regional empire) collapses, there are few shocks, if any, to the social system as a whole. This is how the culture of India has survived with little change for many centuries. Loosely coupled systems and societies are stable and long-lasting, and permit a degree of diversity that cannot be imagined in large, monolithic societies and systems.

When the British took over India, they imposed their centralized, monolithic organization and processes on a diverse, loosely coupled society. This has never worked well. After Independence, India was bitten by the Socialism bug, which, since it came from the West, was cut from the same cloth; it emphasized centralization and elimination of diversity. The many layers of bureaucracy are the result of having to create a centralized administrative structure for this vast land.

First, let's kick out the virus of socialism; apart from a few ideas that could easily have been derived from humanism, socialism has done far more harm than good. The additional baggage of centralization that came with socialism has done even more harm. There needs to be centralization of law and order enforcement -- we need to have uniform laws and rights for all, in theory as well as practice. And we also need to ensure that Indian citizens are permitted to move and settle freely throughout the land without fear or favor. Beyond this, administration needs to devolve to local units. The long arm of the Central Government needs to be shrunk by several orders of magnitude. We need more autonomy throughout India to reflect a diversity of culture and heritage that has withstood the test of time.

So I agree with Pitroda: we can and should use more technology to reduce corruption, but we also need to dismantle, redesign, and reconstruct the administrative policies and processes in the country.

F100 CEOs don't Tweet (but do they Rock 'n Roll?)

Who would you vote to lead the corporations of the present into the future: suits who network with their peers at a country club, nursing a glass of Scotch, or folks in jeans (or even a suit, maybe?) connecting across the globe through social media (Facebook, Twitter, blogs, wikis, etc.)? Okay, my bias is showing here, but so what? And granted, it takes a whole lot more than being social media savvy to run a company -- today. Jonathan Schwartz, CEO of Sun Microsystems, has had a blog for years, dang it, and yet couldn't do better than sell out to Oracle. Then again, maybe he was so savvy, he sold out, and that was the best thing anyone could have done in the prevailing circumstances.


But consider this: today's corporation needs to keep hiring, especially at the entry level, a level made up mostly of young people, who tend to be social media savvy. These are the people who will eventually end up at the uppermost rungs of the corporation and run it. And smart CEOs will in fact go out of their way to hire young people who get New Media. For it's increasingly clear that the future of business -- and society (and even nations: check out all the Tweets coming out of Iran) -- will rest on the ability to tap into and take advantage of social media. Anyway, that's what I believe, and I might try to justify this in another post, or just point the reader to people like Clay Shirky and Chris Anderson and Siva Vaidhyanathan who've likely already done so (see Kevin Kelly).

So it's interesting to learn from UberCEO that the CEOs of Fortune 100 companies don't get social media. Or maybe they get it, but aren't social media savvy. Or perhaps they're savvy but are waiting for a strategic moment to make their entry. Or are so smart that they know that using it really doesn't serve much of a purpose in their businesses.

Hmm ... I doubt that they are that smart, or else one of the traditional guys -- Barnes and Noble or Borders -- would have started something like Amazon (which is now eating their lunch). From the report:
  • Only two CEOs have Twitter accounts.
  • 13 CEOs have LinkedIn profiles, and of those only three have more than 10 connections.
  • 81% of CEOs don't have a personal Facebook page.
  • Three quarters of the CEOs have some kind of Wikipedia entry, but nearly a third of those have limited or outdated information.
  • Not one Fortune 100 CEO has a blog.
I can understand them not having a Twitter account -- even I'm still flailing about doing my best to get it (ain't givin' up 'til I do). Not that I'm a barometer for this sort of thing, but still. But no blog? Dude, blogs are so 1999, and you still ain't there yet? I'd have expected them to have at least hired a 20-year-old to do social media on their behalf. I guess they couldn't find one who got it and was okay with wearing a suit too.
Wikipedia had the highest level of engagement among the Fortune 100 CEOs, yet 28% of those entries had incorrect titles, missing information or lacked sources.
No excuses, babe. Even an old-time marketer will tell you that if you don't actively manage your public image, you'd better accept whatever comes up out there.

Given that F100 CEOs seem to be Old School Tie types, it's rather telling that there are more of them on Facebook (which is swarming with kids) than on LinkedIn, a professional networking service -- so are the CEOs stalking kids on Facebook? Man, that ain't even funny.

Now Facebook is about networking, but that doesn't necessarily mean a place where you post pictures of last night's drunken revelries. Stanford now is experimenting with holding office hours on Facebook, goshdarn it! First guy out was Prof. Phil Zimbardo, he of the infamous prison experiment. Not exactly a spring chicken, but certainly an out-of-the-box thinker. So it's not about one's age; it's an attitude thing. The New England Journal of Medicine has a presence on Facebook too. If these two very traditional institutions get it, there's no reason why F100 CEOs shouldn't be out there.

There are CXOs who Twitter, but they're pretty far removed from the F100 bubble. Here's a (pretty long) list. More than likely, they're relatively young, and are into tech, or are tech-savvy (e.g., @werner -- Werner Vogels, CTO of Amazon, and @vivek -- Vivek Ranadive, Founder & CEO of TIBCO). And then there are the thought leaders of the new Techno-Business-Cultural Zeitgeist, people like Clay Shirky, Chris Anderson, Tim O'Reilly, Dave Winer, Siva Vaidhyanathan, Malcolm Gladwell, Al Gore, Lawrence Lessig -- people of considerable influence, especially among those under 30. Or even 40. Despite most of them being over 50. They get it.

Here are more detailed data from the study:

Whaddaya think?

Pre:iPhone::Mac:PC - It's like old times all over again

Reports coming in from across the web suggest that Palm's Pre smartphone is an outstanding device, and in many ways superior (slider keyboard, wireless charging, camera flash, background applications, web integration, removable battery, and a slick, gesture-based user interface) to Apple's iPhone. While Apple's newly introduced iPhone 3GS ups the ante, there are sufficient reasons for people to stick with the iPhone. Back in the day, Apple's Macintosh, introduced in 1984, attracted a loyal following, but the bulk of the market was owned by the far inferior IBM/Microsoft PC, mainly for two reasons: the lower price, and the availability of many applications.


So while the Pre is a beautiful device that can hold its own against the iPhone, it has thus far been able to garner only a few dozen applications, in comparison to Apple's 50,000 and counting. That difference is going to sway the prospective smartphone customer towards the iPhone, however much she may like the Pre, and so the iPhone is definitely going to have more numbers -- and hence greater profits for the application developer -- than the Pre.

Apple learned its lesson -- it's retained its design cachet, and it now has the application portfolio: an unbeatable combination. All hail thee, St. Steve!

Price, Aesthetics, Functionality

Back in the paleolithic age of personal computing, Apple was among the first out of the gate in the mid-1970s, and then established a strong presence through its open hardware architecture (which was mostly a product of co-founder Steve Wozniak's philosophy). Then in 1981, IBM came in like a tsunami and swept away the PC market from Apple. IBM's message was: we're the Big Serious Computer Company, not a bunch of bearded hippies like the other guys. Apple came back with the paradigm-busting Macintosh and quickly built up a fanatical following. Apple was big on aesthetics and usability. The IBM-Microsoft product was a sloppy apology, but it had the right combination of a lower price and a vast ocean of applications that the Mac could only dream of. Price and functionality won over design aesthetics and usability. In 1996, before he returned to Apple, Steve Jobs declared:
If I were running Apple, I would milk the Macintosh for all it’s worth–and get busy on the next great thing. The PC wars are over. Done. Microsoft won a long time ago.
What nobody realized at the time was that Jobs was plotting his revenge -- and that he had learned his lessons well from the Mac-PC wars. Fast forward a decade to 2007, and Apple introduces the iPhone. Well-known industry expert John Dvorak declared it dead in the water. Aesthetically, the iPhone was the Mac's spiritual successor -- and indeed, in its refinement, outshone it in every department. There was a feeling going around, however, that it would suffer the same fate as the Mac -- a fine device worthy of being an art exhibit, but overpriced and likely an also-ran in a crowded cellphone market. It didn't escape Jobs' sharp intellect that applications were what made the PC a rip-roaring success. At the same time, his high personal standards refused to permit him to put out an aesthetically inferior product. As he observed of the products coming out of Apple before his celebrated return in 1997:
"The products suck! There's no sex in them anymore!"
-- On Gil Amelio's lackluster reign, in BusinessWeek, July 1997
Hence, rather than compromise, Apple came out with an elegant product and aggressively promoted a market for iPhone applications -- which now number over 50,000. When the 3rd generation iPhone 3GS debuted in stores last weekend, over one million sets were sold.

There is a lesson here for the marketing of functional digital products. There appear to be three important dimensions to a product or service: Price, Aesthetics, and Functionality. The IBM PC scored low on Aesthetics, but its scores for Price and Functionality were high. It won out over the Mac, which scored high on Aesthetics but low on Price and range of applications (Functionality). The iPhone scores moderately on Price (neither too high nor too low), but scores very highly on Aesthetics and Functionality (apps). No wonder, then, that it's beating the pants off the competition; it's like buying a high-end PC and getting a Mac for free. Who wouldn't go for it?
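
To make the framing concrete, here is a toy sketch of that three-dimension comparison. The 0-10 scores and the equal weights below are my own rough guesses, offered purely for illustration, not measured data.

```python
# Toy illustration of the Price / Aesthetics / Functionality framing.
# Scores (0-10, higher is better) and equal weights are rough guesses, not data.

products = {
    "IBM PC (1980s)":   (8, 3, 8),  # (price_value, aesthetics, functionality)
    "Macintosh (1984)": (3, 9, 4),
    "iPhone (2009)":    (6, 9, 9),
}

def overall(scores, weights=(1.0, 1.0, 1.0)):
    """Simple weighted sum across the three dimensions."""
    return sum(s * w for s, w in zip(scores, weights))

for name, scores in sorted(products.items(), key=lambda kv: -overall(kv[1])):
    print(f"{name:18} -> overall score {overall(scores):.1f}")
```

Under any reasonable weighting of the three dimensions, a product that is strong on two of them and merely adequate on the third tends to come out on top, which is exactly the iPhone's position in this telling.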

Tuesday, June 23, 2009

Innovation and evolution: winning the battle, losing the war

Life itself is a chancy thing, but, like love, the pursuit of innovation is among the most fickle and heartbreaking ways to spend one's time on earth. And just as nobody ought to fall in love for any reason but to love, it is wise never to consider innovating unless the process of innovation is itself intrinsically appealing and satisfying to the heart. Whatever comes out of the process ought to be treated as a bonus.


Millions of organisms, many beautiful to behold, have become extinct over the eons, and thousands more face extinction every day. They have reached an evolutionary dead-end, not because of any intrinsic shortcoming, but because they no longer satisfy the criterion of fitness with their environments. It is Richard Dawkins' Blind Watchmaker at work, dispassionately removing those organisms that no longer fit into the grand scheme of things, regardless of their intrinsic merit.

The same sort of phenomenon is observed with respect to innovations, except that the ecosystem in which they emerge (or in which they face extinction) is human society and not the natural world. There is a certain tragic quality to this state of affairs: On the one hand, it is in human nature to try to perfect any artifact that emerges out of the imagination, and in the case of complex artifacts such as advanced technologies, such perfection requires intense, repetitive effort, and many iterations and years before any level of perfection is achieved. On the other hand, the ecosystem is an unfeeling context which cares little for human aspirations regarding the perfection of innovations. At any moment, an artifact or technology can be rendered unfit (in an evolutionary sense) because of changes in the environment, especially the emergence of competing technologies; and however wondrous and intricate it may be, further development ceases abruptly, and it is left to be mourned only by its sometimes resentful inventors, but is cruelly forgotten, or even ridiculed, by the masses. Not long after, it begins to appear quaint, archaic, obsolete, something that would never have had a chance to survive, anyway.

Charles Babbage's complex Difference Engine, a mechanical calculator made up of thousands of gears that he designed in 1822, is one such example. Machines constructed from his designs are beautiful to behold and demand considerable precision and effort to build. But they are far more expensive to build and far less capable and accurate than any modern, digital, hand-held calculator. The mechanical computers were great innovations, but in retrospect would never have been able to scale up. A very similar example is that of beautiful, expensive, high-precision mechanical watches and their far less expensive, much more mundane, but nevertheless far more accurate digital successors. Mechanical winding watches have been relegated (or perhaps elevated) to the status of expensive fashion accessories, even jewellery, whose primary function is that of an adornment and an object of wonder rather than a device that tells the time.

Over and over again, Mother Nature seems to have been right, but her correctness is perceivable only in retrospect, and often only after an interminably long period.

I came across two other instances of Cruel But Always Correct Nature: the American Apollo space program and Blu-Ray optical discs. I was surprised to see the first described as some sort of failure -- an evolutionary dead-end -- and to see the latter called obsolete already; but upon reading through the articles, I understand and agree with the sentiments expressed. The US space program was launched in right earnest, thus:
America had been inspired by President Kennedy's wish, announced in 1961, of "achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to Earth." After his assassination in 1963, the idea became a homage to him, a way of showing the world what the United States would have achieved had he lived. Within days of the Apollo 11 astronauts' safe return to Earth, someone put a message on Kennedy's grave: "Mr President, the Eagle has landed." Job done, in other words.
By 1969, the task was done, and the last moon mission occurred in 1972 - Apollo 17; the next three missions were cancelled, and that was that. The space mission was launched not so much to land a man on the moon as to tell the world that the US could beat the USSR at its own game. And it did -- at immense financial and personal cost:
It was also an extraordinarily expensive project, it should be noted. The entire Apollo programme cost $24bn in 1960s money - around $1 trillion in today's - and for several years was swallowing up almost 5 per cent of the US federal budget. In addition, there was also a considerable emotional cost to the missions, a point stressed by Christopher Riley, co-producer of the 2007 documentary In the Shadow of the Moon. "A great many Americans suffered premature heart attacks and strokes from their efforts in making the Apollo project succeed. More than 400,000 workers were employed by private contractors to build capsules, rocket engines, space suits, and computers for Apollo and the vast majority worked flat out, over weekends and holidays, much of the time for free, for several years to make sure the programme succeeded."

For example, at the Grumman factory in New Jersey, where the lunar module was built, staff would clock off at 5pm, leave by the front door, walk round to the back and work for free until midnight. Similarly, employees at the International Latex Corporation - which made the suits worn by the Apollo astronauts - worked with equally obsessive intensity. In a recent documentary, the company's senior seamstress, Eleanor Foraker, recalled working 80-hour weeks without days off or holidays for three continuous years, suffering two nervous breakdowns in the process. "I would leave the plant at five o'clock in the morning and be back by seven. But it was worth it, it really was."
Looking back, it appears that the Apollo mission was destined to be an evolutionary dead-end. It achieved its principal purpose, and then it became clear it was fit for little else:
In the end, the real problem for Nasa is that it did the hardest thing first. Kennedy's pledge to fly to the moon within a decade was made when its astronauts had clocked up exactly 20 minutes' experience of manned spaceflight. "We wondered what the heck he was talking about," recalls Nasa flight director Gene Kranz. To get there before the Russians the agency was obliged to design craft that were highly specific to the task. Hence the Saturn V, the Apollo capsule and the lunar module. Unfortunately, these vehicles were fairly useless at anything else in space - such as building a space station - and Nasa, having nearly broken the bank with Apollo, had to start again on budgets that dwindled dramatically as the decades passed.
The article concludes:
The conclusion is therefore inescapable. Kennedy's great vision and Armstrong's lunar footsteps killed off deep-space manned missions for 40 years - and probably for many decades to come. As DeGroot says: "Hubris took America to the moon, a barren, soulless place where humans do not belong and cannot flourish. If the voyage has had any positive benefit at all, it has reminded us that everything good resides here on Earth."
The other example here is the Blu-Ray optical disc, which was created by an industry consortium as a successor to the ubiquitous DVD. Compared to the DVD's 4.7 GB capacity, the Blu-Ray holds far more: 25 and 50 GB. Blu-Ray competed for market space with Toshiba's HD-DVD and won the format war. But not for long, it would appear. While scientists were focused on a successor to the DVD (which in turn succeeded the Compact Disc), the ecosystem changed around them, and direct digital downloads over the internet are growing in popularity along with connection bandwidth. Blu-Ray has won the battle, but it may eventually lose the war. It's not clear if Blu-Ray's developers will ever recoup their development expenses. Blu-Ray players are priced high to recoup those costs, but this very situation militates against their rapid market penetration, especially during a recession. From the article:
Blu-ray will doubtlessly continue to grow in popularity as more of us buy large HD-capable flat screen televisions. In the same vein, it’ll continue to make inroads in computing and video gaming markets. But it’s a case of too little, too late, as long-term trends point to a slower uptake than DVDs ever had. When we can simply download a good-enough copy of a movie from iTunes and save it to a USB drive or mobile device for viewing pretty much anywhere, why would we even bother with a power-hungry, noisy, expensive and frankly inconvenient disc in the first place?

By the time most consumers have asked themselves this question, the answer will already be in: Optical discs are a fading technology, and investing in them now could be a shorter-term move than you might have initially anticipated.
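To put the download-versus-disc point in rough numbers (the connection speeds and file sizes below are illustrative assumptions, not measurements), here is a quick back-of-the-envelope calculation:

```python
# Rough illustration: how long a full disc image vs. a compressed movie takes to
# download at a few assumed connection speeds. Sizes and speeds are hypothetical.

def hours_to_download(size_gb, mbps):
    """Download time in hours for size_gb gigabytes at mbps megabits per second."""
    bits = size_gb * 8e9              # decimal gigabytes -> bits
    return bits / (mbps * 1e6) / 3600

for label, size_gb in [("Blu-Ray disc image (~25 GB)", 25.0),
                       ("'good-enough' movie download (~1.5 GB)", 1.5)]:
    for mbps in (2, 10, 50):
        print(f"{label:40} at {mbps:>2} Mbps: {hours_to_download(size_gb, mbps):5.1f} hours")
```

Even at modest 2009-era broadband speeds the compressed download arrives in an hour or two, while a full disc image can take anywhere from an hour on a very fast connection to more than a day on a slow one; as bandwidth grows, the gap that justified the disc keeps shrinking.
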
Your typical management tome might ask 'managers' to perform some sort of 'strategic analysis of the market/technology landscape' by employing a sainted analytical tool that gives cute names to different cells in a matrix. The reality is that no management theory can beat large-scale paradigm shifts, or even the demands of the moment. Oftentimes, you do what you are able to do and hope the ecosystem will continue to support you, even while placing a few bets on the side on other alternatives whose prospects currently seem remote.

Embracing the gorilla: Government sponsored innovation

At least to me, the word 'government' brings up images of decrepit offices, musty files, endless ennui, inefficiency, corruption, obdurate stupidity, rule-oriented behavior ... a long list of negatives. Certainly nothing remotely redolent of innovation. If anything, one would imagine government departments and officials more than likely to destroy any innovation or creativity that is taking place, running all over it like a blind and drunken elephant.


Truth is, governments have played important roles in spurring innovation. The Advanced Research Projects Agency (ARPA, also known as DARPA) of the US Department of Defense was behind numerous innovations in information technologies, including the granddaddy of them all, the Internet. Of course, one needs to read the fine print here: DARPA's role was to set policy, find very smart and talented people, give them money and freedom, and then get out of the way.

Without the US government's involvement, it is unlikely that computer and communication technologies would have advanced as far as they have today. Governments alone have the wherewithal to provide funds and facilities for large-scale innovation to occur, and also the patience to wait for long periods without expecting any tangible return. DARPA, and for that matter the US space program (including NASA), was a direct outcome of the USSR's launch of the Sputnik satellite, which in turn led, in a matter of just a decade, to the first human landing on the moon. The US space program has yielded hundreds of innovations, initially intended to solve problems related to the space mission.

It's quite clear that massive, societal-level changes in attitudes and initiatives focused on innovation require the involvement of the government. It is also clear that involving the government is like dancing with a gorilla: you just need to make sure you don't get killed in the gorilla's crushing embrace. DARPA is an outstanding model of governmental involvement in innovation, and the reasons for its success have been outlined as follows:
  • Small and flexible: DARPA has only about 140 technical professionals; some have referred to DARPA as “100 geniuses connected by a travel agent.”
  • Flat organization: DARPA avoids hierarchy, essentially operating at only two management levels to ensure the free and rapid flow of information and ideas, and rapid decision-making.
  • Autonomy and freedom from bureaucratic impediments: DARPA has an exemption from Title V civilian personnel specifications, which provides for a direct hiring authority to hire talent with the expediency not allowed by the standard civil service process.
  • Eclectic, world-class technical staff and performers: DARPA seeks great talent and ideas from industry, universities, government laboratories, and individuals, mixing disciplines and theoretical and experimental strengths. DARPA neither owns nor operates any laboratories or facilities, and the overwhelming majority of the research it sponsors is done in industry and universities. Very little of DARPA’s research is performed at government labs.
  • Teams and networks: At its very best, DARPA creates and sustains great teams of researchers from different disciplines that collaborate and share in the teams’ advances.
  • Hiring continuity and change: DARPA’s technical staff is hired or assigned for four to six years. Like any strong organization, DARPA mixes experience and change. It retains a base of experienced experts – its Office Directors and support staff – who are knowledgeable about DoD. The staff is rotated to ensure fresh thinking and perspectives, and to have room to bring technical staff from new areas into DARPA. It also allows the program managers to be bold and not fear failure.
  • Project-based assignments organized around a challenge model: DARPA organizes a significant part of its portfolio around specific technology challenges. It foresees new innovation-based capabilities and then works back to the fundamental breakthroughs required to make them possible. Although individual projects typically last three to five years, major technological challenges may be addressed over longer time periods, ensuring patient investment on a series of focused steps and keeping teams together for ongoing collaboration. Continued funding for DARPA projects is based on passing specific milestones, sometimes called “go/no-go’s.”
  • Outsourced support personnel: DARPA extensively leverages technical, contracting, and administrative services from other DoD agencies and branches of the military. This provides DARPA the flexibility to get into and out of an area without the burden of sustaining staff, while building cooperative alliances with its “agents.” These outside agents help create a constituency in their respective organizations for adopting the technology.
  • Outstanding program managers: The best DARPA program managers have always been freewheeling zealots in pursuit of their goals. The Director’s most important task is to recruit and hire very creative people with big ideas, and empower them.
  • Acceptance of failure: DARPA pursues breakthrough opportunities and is very tolerant of technical failure if the payoff from success will be great enough.
  • Orientation to revolutionary breakthroughs in a connected approach: DARPA historically has focused not on incremental but radical innovation. It emphasizes high-risk investment, moves from fundamental technological advances to prototyping, and then hands off the system development and production to the military services or the commercial sector.
  • Mix of connected collaborators: DARPA typically builds strong teams and networks of collaborators, bringing in a range of technical expertise and applicable disciplines, and involving university researchers and technology firms that are often not significant defense contractors or beltway consultants.
DARPA's success comes from a combination of, among other things, outstanding minds, individual empowerment, non-bureaucratic organizational structure, operational flexibility, challenging goals, and a large source of funds: strength, flexibility, focus, power. If a government is to be involved in innovation, we must take only the best of what a government can offer and ruthlessly discard everything else, especially the governmental penchant for interference and sloth. The organization needs to be a true and unapologetic meritocracy, unfettered by any other governmental or social policy and unwaveringly committed to achieving national innovation goals.

DARPA is over 50 years old, and yet its contribution is largely unknown to the general public in the US -- or anywhere else, for that matter. And perhaps that is how it should be; it's better for the public's gaze to be focused on the innovations themselves rather than on the organization responsible for them, as long as financial and political support for the organization does not flag. Proof of its general invisibility is seen in this recent New York Times article, entitled Can governments till the fields of innovation?, which does not mention DARPA even once. Just a cursory glance at DARPA's contributions reveals the answer to that question to be an unequivocal and resounding 'YES!'.

The NYT article says,
But governments are increasingly wading into the innovation game, declaring innovation agendas and appointing senior innovation officials. The impetus comes from two fronts: daunting challenges in fields like energy, the environment and health care that require collaboration between the public and private sectors; and shortcomings of traditional economic development and industrial policies.
Not true -- governments have been involved in the innovation game for decades, perhaps for centuries. When a monarch granted a talented individual a commission to produce a work of art or engineering or anything else, that was an example of government involvement in innovation. Japan's Ministry of International Trade and Industry (MITI) spearheaded that nation's 5th generation computer initiative in the 1980s. Defence industries throughout the world -- even in the US -- rely almost entirely on government largesse and involvement to innovate. The various government labs around the world are testament to deep governmental involvement in innovation. It can be said, however, that a move is being made to expand the role of government in innovation beyond a few narrow fields to a much wider swath of domains, and focus not only on technologies, but perhaps also processes and methods -- and in a more visible and public manner. This latter part is what gives me pause -- will innovation be crushed in the powerful embrace of the gorilla?

India has set up a National Innovation Foundation stewarded by noted scientist Dr. R A Mashelkar. I don't have any hard evidence, but NIF doesn't appear to be another DARPA -- it may well be far more of a bureaucratic government organization. Now, NIF does have an entirely different focus: while DARPA focused on strategic, foundational technologies with long-term and large-scale implications, NIF's byline says, In support of grassroots innovations. This is an interesting approach, and I suppose we need both kinds of innovation -- bottom-up and top-down. The latter brings the most educated and talented minds to bear on complex goals, while the former -- in principle -- provides avenues for everyday innovation to sprout and, therefore, for a culture of innovation to be inculcated in Indian society at large. At least, that's how it ought to be. Mashelkar makes an important point:
“If you make something for the rich, the poor cannot afford it,” Mr. Mashelkar said. “But if you design for the poor, everyone can afford it.”
Should it be left to NGOs, though? The Honey Bee Network, founded by IIM Ahmedabad professor Anil Gupta, is one such effort. But large-scale societal impact needs the heft and power of a government -- without, of course, its potentially deadly embrace. In the US, Dr. John Kao, a man of extraordinary talent and with an unusual background, has established the Institute for Large Scale Innovation (ILSI) to address the following questions:
  • How can innovation be fostered most effectively at a societal level (country, region, city)?
  • How can innovation be harnessed to address complex global challenges, and how should innovation stewardship work at a global level?
  • How can international collaboration and alignment be encouraged in explosively growing new areas of scientific, technological and human design innovation, such as cleantech, nanotechnology and others?
ILSI is not a government department; rather, it is a private non-profit (sponsored by Deloitte LLP) and run by a reputed, charismatic, individual who is using his influence to shape innovation. It is not unlikely that government agencies will collaborate with ILSI. Here's what ILSI seeks to do:
  • Supporting the emergence of a network of significant innovation leaders with the influence to provide a meaningful stewardship function for innovation at national and international levels.
  • Developing agenda-setting intellectual capital that defines large-scale innovation and leads to the development of meaningful tools and best practices.
  • Creating high quality learning experiences relevant to the next generation of innovation leaders.
  • Underwriting research that documents the emerging global innovation economy, key innovation flows as well as new competitive dynamics and opportunities.
This is an interesting model, one far more likely to succeed than a government agency. Will it work in India? I think so. There are numerous talented and motivated individuals in India who are yearning to foster innovation and constructive change in society, beyond the usual political and ideological engagement of leftist groups who have a narrow and hidebound definition of innovation. Of course, the social, cultural and economic situation in India is far removed from what prevails in much of the West. Nevertheless, there is a new class of business leaders, such as Nandan Nilekani of Infosys and others, who are genuinely engaged in the process of grassroots transformation and will likely be willing to underwrite such initiatives.

Government involvement is no doubt necessary, but we ought to choreograph a dance in which the gorilla maintains a safe distance.

UPDATE June 24, 2009: Here's an example (Crunchgear) of large-scale government involvement in innovation:
The U.S. Government created a requirement that by 2020, the majority of cars sold here must get at least 35 miles per gallon. This requires a big commitment on the part of auto makers and so the Energy Department was authorized last year to lend $25 billion dollars. The first round of financing is expected to be announced today with Ford, Nissan, and Tesla all getting a sizable chunk during this first round. GM and Chrysler both wanted a bunch of money too, but neither fit the criteria of being "financially viable" so they were disqualified for this first round.
So, the government sets an agenda and establishes some goals that are of national interest. Since achieving these goals requires massive investment of the kind that is unlikely to be of immediate benefit to a for-profit corporation, the government provides the money, probably establishes some oversight, and then keeps its stinkin' hands off.

Zong, Boku, Obopay: The odd new sound of money

In the evolutionary timeline of exchange of goods and services, barter is the grandmommy of them all, followed by a variety of tokens such as seashells and pebbles and then gold and silver coins, followed by promissory notes, paper currency, bank checks and drafts and then credit cards -- the last, perhaps, being largely responsible for the recent recession. In the evolutionary process, we moved from material objects to informational marks on sheets of paper and then digital data enshrined in plastic cards. Now, perhaps, it's time to do away with plastic altogether (thereby ending the long-standing tradition in US retail of asking the customer -- paper or plastic? The new answer is: neither). Your new smartphone (that's why it's smart) now becomes your means of effecting financial transactions.


There's Obopay, Boku, and Zong (continuing the tradition, started perhaps by Yahoo, of having silly names for internet companies). The Zong site asserts that there are more people with mobile phones than credit cards, so it makes sense to use those devices as a means for money transfer. Zong also says that people are more likely to buy using a phone number since the process is less painful, not requiring the usual information (address, expiration date, card number, etc.).

While all three are touted as means of purchasing goods and services, they seem to be focusing principally on the one-way transfer of money to others rather than on two-way transactions (money one way, goods/services the other), since the latter is really out of their control. As with services like PayPal (now owned by eBay), verification methods for two-way transactions will emerge soon enough.

It used to be that you could either carry cash in your pocket or resort to some sort of delayed/deferred payment method. Now, as long as everyone carries a mobile phone, it is no longer necessary to carry cash at all -- except, of course, for any charges that the mobile carriers tack on, which they will, with alacrity, since this is likely to explode into a major revenue source. I expect mobile carriers to initially behave greedily, taking a big cut, since gouging has been one of their well-established behavior patterns. But given the convenience, these methods are likely to catch on across the socio-economic spectrum.
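None of these companies publishes a fee schedule here, so the numbers below are purely hypothetical, but the basic economics of a carrier-billed purchase -- the merchant receives the price minus whatever cuts the carrier and the payment service take -- can be sketched in a few lines of Python:

# Hypothetical economics of a carrier-billed mobile purchase. The fee
# percentages and the purchase amount are invented for illustration;
# actual carrier and service cuts vary and are not published here.

def settle(price, carrier_cut=0.30, service_cut=0.05):
    """Split a carrier-billed purchase between carrier, payment service, and merchant."""
    carrier_fee = price * carrier_cut
    service_fee = price * service_cut
    merchant_take = price - carrier_fee - service_fee
    return {
        "billed_to_phone_bill": round(price, 2),
        "carrier_fee": round(carrier_fee, 2),
        "payment_service_fee": round(service_fee, 2),
        "merchant_receives": round(merchant_take, 2),
    }

print(settle(10.00))
# {'billed_to_phone_bill': 10.0, 'carrier_fee': 3.0,
#  'payment_service_fee': 0.5, 'merchant_receives': 6.5}

Whatever the real percentages turn out to be, it is that carrier_fee line that will decide whether this stays a convenience or turns into gouging.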

Mobile money may be the killer app that convinces the last holdouts to go for a cellphone. Certainly, mobile phone penetration, growing very rapidly in India and China, should see a spike with the introduction and use of MoMo (if I may coin a term).

Monday, June 22, 2009

Search evolves: Google, Alpha, Hunch (& Bing?)

Search has become to the World Wide Web what word processing was (and largely is) for desktop computing: the single most important application. Before the advent of the Web (and the proliferation of networking) the most common use for personal computers was word processing; the purchase of a PC was justified on the basis of just this one application (and, perhaps, spreadsheets, among the bean-counting set). Now nobody I know justifies buying a PC in order to do searches on the internet; rather, people are interested in email and in surfing the web. But you can't just 'go surf the web': you need a place to start. And that place invariably is the search box of a Search Engine.


While it may not be the First Internet Search Engine Ever (FISEE?), Yahoo! was the first successful and iconic one, which made its founders very wealthy and caused venture capitalists to open up their wallets and purses to hordes of prospective Web 1.0 success stories. As I recall, back in 1994, when Yahoo was still a side project of two Stanford computer science graduate students, it started off as merely a list of existing websites (of which there were only a few hundred at the time). As it evolved, it became a combination of a search box and a list of potential links that surfers could visit by clicking on them directly. Yahoo! both categorized websites and allowed free-form search.

Yahoo! was followed by up to a dozen other search engine efforts that used different algorithms but presented the same paradigm of a search box and a list of categorized links.

Google, another Stanford computer science production, went the other way, simplifying the interface down to its bare essentials: all that the user saw was a single search box into which a search term was typed; in response to pressing the enter key, the search engine presented a massive list of hits ordered using proprietary, and frequently tweaked, 'PageRank' algorithms. A later player in the game, Microsoft, copied the same approach but used different algorithms. The Big Three of Search -- Google, Yahoo and Microsoft -- have dominated the search space and have used largely the same approach of presenting the user with a long list of sites ordered by some criteria. This situation has persisted for nearly a decade.
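Google's production ranking is proprietary and, as noted, frequently tweaked, but the core PageRank idea was published by Brin and Page while still at Stanford; a minimal power-iteration sketch of that published formulation, over a toy link graph invented for illustration, looks something like this:

# Minimal sketch of the published PageRank idea (Brin & Page), not Google's
# production ranking, which is proprietary and constantly tweaked.
# The tiny link graph below is invented purely for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}           # start with a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                               # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:                    # each page passes rank to the pages it links to
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))

The point of the sketch is only the principle: a page's rank comes from the ranks of the pages linking to it, iterated until the numbers settle.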

Come 2009, two upstarts have surged onto the stage employing radically different approaches: Wolfram Alpha and Hunch. (There is a fourth entrant, Bing, but more about it later.) Alpha and Hunch represent completely new approaches that break away from the dominant paradigm represented by Google and Yahoo (and even Bing, for that matter). Alpha and Hunch embody the first entirely new innovations in search in nearly a decade. Alpha, in fact, is not search at all, strictly speaking: it is a "computational knowledge engine". Alpha computes answers from a vast and rapidly growing database maintained and curated by sophisticated professionals. Alpha focuses on facts and systematically organized knowledge. From a blurb on its website:
Wolfram|Alpha's long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.
Alpha tries to answer any question that requires computation of some sort. Sample topics include:
  • Mathematics
  • Engineering
  • Dates & Times
  • Physics
  • Money & Finance
  • Units & Measures
  • Chemistry
  • Places & Geography
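To be clear, this is not how Alpha is actually built -- its engine is Mathematica and its curation pipeline is far more sophisticated -- but the basic contract of "curated facts plus computation, rather than a list of links" can be caricatured in a few lines; the facts, figures and query format below are invented for illustration:

# A caricature of the "curated facts + computation" idea behind a
# computational knowledge engine. The facts and the query format are
# invented for illustration; Alpha's real engine is Mathematica-based
# and vastly more sophisticated.

CURATED_FACTS = {
    ("france", "population"): 64_000_000,        # illustrative figures only
    ("france", "gdp_usd"): 2_850_000_000_000,
    ("germany", "population"): 82_000_000,
}

def answer(country, numerator, denominator):
    """Compute a derived quantity (e.g. GDP per capita) from curated facts,
    or refuse if the knowledge base has no matching entries."""
    try:
        return CURATED_FACTS[(country, numerator)] / CURATED_FACTS[(country, denominator)]
    except KeyError:
        return None   # outside the curated knowledge base, no answer at all

print(answer("france", "gdp_usd", "population"))   # a computed, not retrieved, answer
print(answer("germany", "gdp_usd", "population"))  # None: not in the knowledge base

The second query returns nothing, which is the defining trait of this approach: outside the curated knowledge base, there is no answer at all.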
Hunch goes in pretty much the other direction. It relies on crowdsourcing to gather knowledge and then answer questions input by users. Again, from the blurb:
Hunch is a new way to help people make all kinds of decisions, such as:
  • Where should I go on vacation?
  • What's the best US college for me?
  • What kind of smartphone is right for me?
  • Which museum should I visit in the Netherlands?
  • What blog should I read?
Results are based on the collective knowledge of Hunch's users. Hunch already has more than 2,500 possible topics, and Hunch users add new topics every day.
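Hunch has not published its algorithms, so the sketch below is not how Hunch actually works; it is only a toy illustration of the crowdsourced flavor of that blurb -- tally what users recommend for a topic and surface the favorites (the topic, options and votes are invented):

# Toy illustration of crowdsourced decision-making in the spirit of the
# blurb above -- not Hunch's actual algorithm. Topic, options, and votes
# are invented.

from collections import Counter

# Each entry: (user_id, topic, recommended_option)
votes = [
    (1, "netherlands museum", "Rijksmuseum"),
    (2, "netherlands museum", "Van Gogh Museum"),
    (3, "netherlands museum", "Rijksmuseum"),
    (4, "netherlands museum", "Anne Frank House"),
    (5, "netherlands museum", "Rijksmuseum"),
]

def recommend(topic, votes, top_n=2):
    """Rank options for a topic by how many users recommended them."""
    tally = Counter(option for _, t, option in votes if t == topic)
    return tally.most_common(top_n)

print(recommend("netherlands museum", votes))
# [('Rijksmuseum', 3), ('Van Gogh Museum', 1)]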
Hunch works off the same paradigm as Wikipedia, while Alpha's approach is more Encyclopedia Britannica -- which is ironic, since the former seems to have beaten the latter in the encyclopedia game. But both will likely have their own constituencies and will complement rather than supplant each other. Google doesn't work to create knowledge bases: instead, it indexes every publicly accessible page that comes online and performs lexical searches over its search index in order to deliver responses to search terms. Note that Google is search-term oriented -- queries are not asked, but rather implied. Alpha and Hunch, on the other hand, are query oriented. They explicitly assemble knowledge and attempt to answer only that subset of queries that match the data available in their knowledge bases. Only Alpha systematically organizes and curates its knowledge.
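The "lexical search over an index" half of that contrast is easy to sketch. A toy inverted index over a handful of invented documents shows why such a search hands back pages that contain your words rather than answers to your question:

# Toy inverted index: the "lexical search" half of the contrast above.
# It returns documents that contain the words, not answers to questions.
# The documents are invented for illustration.

from collections import defaultdict

docs = {
    "doc1": "population of france and germany",
    "doc2": "france travel guide and museums",
    "doc3": "germany gdp statistics",
}

# Build the index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return documents containing every query term (a purely lexical match)."""
    terms = query.split()
    if not terms:
        return set()
    results = index[terms[0]].copy()
    for term in terms[1:]:
        results &= index[term]
    return results

print(search("population france"))   # {'doc1'} -- a page, not an answer

Contrast this with the curated-facts sketch above: the index knows which documents mention 'population' and 'france', but it has no idea what the population of France actually is.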

UPDATE June 24, 2009: To (over)simplify,

Alpha says,
As long as it is something computable, if I understand your question, I can figure out the answer. And only I. I'm not going to let the riff-raff mess with my pure and elegantly computed knowledge. And any question I cannot answer isn't worth answering.
Hunch says,
I'll ask everybody what the answer is and maybe they'll know. Please ask everybody to join in the fun and together we'll be able to answer all of everybody's questions. Maybe. Eventually.
And Google says,
Let me parachute you into my vast junkyard, somewhere in the vicinity of where I think you might find the answer to your question. Or not. And if you don't find it today, come in tomorrow. Or the day after. Keep coming, you might find something you like. Or look at everything differently, and maybe you'll find your answer in there someplace.
Both Alpha and Hunch were founded by well-known (and colorful) personalities from the world of computing and the internet: Alpha by Stephen Wolfram, the creator of Mathematica, and Hunch by Caterina Fake, who co-founded Flickr, later acquired by Yahoo! The core computational engine used by Alpha is Mathematica, while the Web 2.0 ideas first seeded in Flickr form the basis for Hunch. It appears that powerful ideas cast long shadows.

I don't see either Hunch or Alpha replacing Google. Given the vastness of the web universe, there is far more out there than can be captured within finite (if growing) knowledge banks such as those of Hunch and Alpha. But clearly, if Hunch and Alpha can answer 80% of the questions people ask every day (prices, schedules, feature comparisons, etc.), they can capture significant chunks of the search market and still leave a lot of room for Google to sweep up the rest. It is also possible that Google (or Bing, or Yahoo) will buy out Hunch and integrate it into its search engine. It's unlikely that Wolfram will sell Alpha, since it is founded on Mathematica.

The good news, then, is that innovation is far from dead in the search segment. Perhaps there is much more to come in this area in the years ahead.

And oh, about Bing -- for an engine being presented as something new in 2009, it seems stuck in the old paradigm; unlike Hunch and Alpha, it's trying to out-Google Google. I'm not sure Microsoft's old tricks will keep working over and over again.

Constructive Stupidity

Here are some definitions of stupid I found on the web:

  • annoying or irritating; troublesome: Turn off that stupid radio.
  • lacking ordinary quickness and keenness of mind; dull.
It's quite likely that many of this blog's readers have found something, at some time, to be "stupid" -- it doesn't make sense, or it is inappropriate, suggesting that the mind that came up with the idea must be somehow inferior. The thing is, a lot of things that once seemed 'stupid' to someone are deemed perfectly reasonable today: flying to the moon (or even flying at all); women attending college (or even voting); trying to achieve 100% literacy in a society; traveling long distances in a single day; curing many diseases once considered fatal ... the list of once-upon-a-time 'stupid' things is long and will continue to grow longer, as many of today's prejudices become tomorrow's accepted practices.

Nevertheless, the stigma associated with the term 'stupid' is so strong that most people would not dare even attempt anything considered by the majority to be stupid. It therefore requires a certain kind of person, impervious to skepticism, criticism, ridicule, censure, or even ostracism, to go against the grain and willingly undertake stupid initiatives. Some of these people might be true cranks, but a whole lot of them are intelligent, persistent, strong-willed souls, seekers of the unknown, who tilt at windmills to make the stupid commonplace and accepted. Some go even further and wear the mantle of stupidity, which from then on gives them permission to pursue their 'stupid' obsessions.

Which brings me to this fine essay called The importance of stupidity in scientific research by Martin Schwartz of the University of Virginia. Schwartz writes about meeting a former graduate-school colleague of his who, although he considered her to be very, very smart, dropped out of the PhD program and became a very successful lawyer instead. Her explanation for dropping out was that the program made her 'feel stupid'. He writes,
I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It's just that I've gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn't know what to do without that feeling. I even think it's supposed to be this way.
He explains that scientific research is hard, since it involves explorations into uncharted terrain and is therefore expected to make one feel stupid. In order to make discoveries, one stays with the feeling of stupidity until insights begin to emerge.
... we don't do a good enough job of teaching our students how to be productively stupid – that is, if we don't feel stupid it means we're not really trying. I'm not talking about `relative stupidity', in which the other students in the class actually read the material, think about it and ace the exam, whereas you don't. I'm also not talking about bright people who might be working in areas that don't match their talents. Science involves confronting our `absolute stupidity'. That kind of stupidity is an existential fact, inherent in our efforts to push our way into the unknown. Preliminary and thesis exams have the right idea when the faculty committee pushes until the student starts getting the answers wrong or gives up and says, `I don't know'. The point of the exam isn't to see if the student gets all the answers right. If they do, it's the faculty who failed the exam. The point is to identify the student's weaknesses, partly to see where they need to invest some effort and partly to see whether the student's knowledge fails at a sufficiently high level that they are ready to take on a research project.
I like his coining of the expression productive stupidity:
Productive stupidity means being ignorant by choice. Focusing on important questions puts us in the awkward position of being ignorant. One of the beautiful things about science is that it allows us to bumble along, getting it wrong time after time, and feel perfectly fine as long as we learn something each time. No doubt, this can be difficult for students who are accustomed to getting the answers right. No doubt, reasonable levels of confidence and emotional resilience help, but I think scientific education might do more to ease what is a very big transition: from learning what other people once discovered to making your own discoveries. The more comfortable we become with being stupid, the deeper we will wade into the unknown and the more likely we are to make big discoveries.
There is practically no essential difference between this part of the process in science and in any other creative endeavor. Creativity and innovation are excursions into the unknown; there is often no map to guide one's efforts, and much speculation, guessing, and trial and error are needed. This is how discoveries happen; this is the road that leads to that supremely satisfying aha! that makes the entire enterprise worthwhile, providing intrinsic rewards for one's efforts.

Those who are consistently creative and also the best of scientists are never afraid to feel or appear stupid. Being stupid is often the first step to challenging the status quo and constructing entirely new paradigms.

So let's come up with a new ritual for ourselves: every morning, let's look at ourselves in the mirror and say out loud: I am going to think or do something really stupid today, and that makes me feel so good!

Saturday, June 13, 2009

Here's to the crazy ones (old Apple ad)

You've got to love a company that celebrates the misfits who made it anyway, transforming life as we knew it -- unlike the other guys, whose heroes come sporting suits and ties, their fake grins photographed and hung on the wall outside a classroom named for them at Harvard Business School.

A New Social Media Order

I am no fan of Socialism with a big 'S' -- indeed, of any ideology that seeks to enforce a social order, shape human behavior, and exercise control over legitimate and morally acceptable action by individuals. Over sixty years of so-called socialism has done as much harm, perhaps more, to the nation and the people of India as any benefits it has wrought. The crimes committed by intelligent, educated and well-meaning persons are sometimes far more devastating and enduring than those of brutish tyrants; such, I believe, has been the fate of this great nation as it became subjected to alien ideologies concocted in distant minds without the consent and understanding of the vast majority of her people. Thus, when I read of Western neo-liberals cooing over the coming of the New Socialist Order via the internet, I am torn between the desire to throw up and the need to rant and rave at evidence of rank stupidity. I will, instead, try to carefully explore the matter and ruminate over the grander social implications of the rapid growth of social media.

My son was studying for his economics examination the other day and the subject of collective bargaining came up. It is commonly expressed that man is a social animal. Whether the matter of being social is instinctual or learned from experience is typically never brought up. The assumption usually is that it is an instinct -- being social is what we are about -- and we'd better accept our lot. Far more interesting is the idea that we discovered the benefits of being social early on, perhaps while watching pack animals hunt together and succeed against much better-endowed prey. We learned that working as a group imbued us with a power that we don't possess at all as individuals; the process of organizing into a group itself generates a collective energy and strength that can intimidate and threaten relatively large and powerful opponents. Furthermore, we discovered that each individual had distinct, innate talents that if allowed to flower and develop would benefit not only that individual, but also the entire social group to which she belongs. So each of us enters into an unstated compact with our social group: you give me the time to pursue the development of my talents, and I'll give back things that would benefit you. In this manner, homo sapiens has carried the benefits of being social far beyond what nature may have had in mind (so to speak).

The benefits of organizing typically grow with numbers, but for any structural configuration or order a point is reached beyond which the Law of Diminishing Returns becomes operative: a lot of effort goes in merely to keep the social structure intact. When the costs of organizing exceed any benefits that it might deliver, organizing -- or at least, that particular organizational configuration -- loses its (originally intended) purpose. The organization may continue to exist for some time, but primarily due to inertia. Over time, the forces of entropy will likely tear it apart. It is before this point is reached that an entirely different structural paradigm is called for in order for groups of individuals to become effective. We see this in the way cults, kingdoms, states, armies, guilds and businesses, for instance, organized (and disbanded) over the centuries of recorded human history.

The modern corporation is the product of ideas of organization developed in 20th-century Europe. While the key concepts underlying bureaucracies have been in place since the times of the great empires of ancient history (e.g., Egypt, Rome, Persia, Mesopotamia, Maurya, Han), it is the work of Max Weber that has had the most influence on how large modern organizations are structured. These powerful ideas have helped shape the megacorps of the 20th century -- General Motors, Exxon, GE, Wal-Mart, IBM, Royal Dutch-Shell, the Tatas, Reliance and others. Even Third Wave organizations such as Google, despite their many innovative philosophies of culture and process, borrow several key organizational elements from the corporate behemoths of yore; they don't represent entirely new organizational paradigms but modifications, sometimes significant, of the same old paradigm. Perhaps it was necessary to be thus organized because the socio-economic landscape at large was (and still is) located -- I would say, mired -- in the Old Order.

While individuals recognize and desire the social and economic benefits that social order and organization provide, among the greatest of drawbacks and dangers of organizations is their sometimes dehumanizing effects. Many choose to exit from large organizations for this reason, and many more stay on for the security but harbor intense resentment all the same, like a couple stuck in a bad marriage who remain together 'for the sake of the kids'.

Which brings us to the subject of alternative organizing paradigms and the role of the Internet in making them possible. A first axiom of all social organization involves communication: all organization is founded on communication among members. Communication is the process, the product, the raw material, and at least a part of the outcome of organization. Communication is the lifeblood of organizations. Hence, no communication, no organization. While organization is far more than communication, the quality and structure of communication can (and do) determine its success or failure.

Human communication -- and organization -- traditionally occurred face-to-face; it involved participants who were Colocated, and communication occurred Synchronously, in real time. As intact groups developed, other methods of intra-group signaling evolved so that group members could remain in communication even when they were not proximally located. Smoke signals may have been the earliest means of real-time telecommunication (Dislocated, Synchronous), while markers and cave drawings may have been used to leave messages that were received later (Colocated, Asynchronous). The advent of written language permitted messages to be delivered to recipients at other locations at later times (Dislocated, Asynchronous). All these extensions to the primordial Colocated, Synchronous process helped human social organization evolve into new forms that could be applied to new initiatives such as warfare, monument construction, knowledge development and dissemination, trade, religion and so on. The emergence of telegraphy and telephony in the 19th century expanded the scope of organizations and permitted them to grow massively in size and expand geographically. Social order and organizational paradigms were extended but, significantly, not radically altered.
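That place-versus-time framing is really a two-by-two grid, which can be written down explicitly; the entries below are simply the examples from the paragraph above:

# The place-vs-time framing from the paragraph above, as a small lookup.
# (colocated?, synchronous?) -> the examples mentioned in the text.

COMMUNICATION_MODES = {
    (True,  True):  "face-to-face conversation",
    (False, True):  "smoke signals, telegraphy, telephony",
    (True,  False): "markers and cave drawings left for later",
    (False, False): "written messages carried to another place and time",
}

def describe(colocated: bool, synchronous: bool) -> str:
    where = "Colocated" if colocated else "Dislocated"
    when = "Synchronous" if synchronous else "Asynchronous"
    return f"{where}, {when}: {COMMUNICATION_MODES[(colocated, synchronous)]}"

for key in COMMUNICATION_MODES:
    print(describe(*key))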

Before the Web, was the WELL (and Usenet). It was only with the coming of computer communications that entirely new forms of organizing people in a sustained manner became possible. The prototypical Third Wave social organization was The WELL (Whole Earth 'Lectronic Link), founded by Stewart Brand and Larry Brilliant in 1985 (a few years before the World Wide Web was born). For perhaps the first time, it was possible to take advantage of the emergent nature of social processes and the availability of computer linkages to meld geographically dispersed groups of individuals into free-wheeling, evolving social organizations. Notably, the WELL attracted individuals who were politically (and socially) liberal in their viewpoints. They liked the relatively anarchic nature of the community, where members made up their own rules and norms rather than trying to fit into some existing, inherited social ruleset. Many were current or former hippies (or hippie sympathizers). Many were also technologically savvy. Here's a blurb from their website:
Welcome to a gathering that's like none other — remarkably uninhibited, intelligent and iconoclastic ... The regulars in this place include noted authors, programmers, journalists, activists and other creative people who swap info, test their convictions and banter with one another in wide-ranging conversations, using their real names.
The WELL served as a model for the tens of thousands of communities that now dwell on the Web; it was, in a real sense, the Mothership.

Web-based communication provides a wide variety of structural configurations for people to connect, communicate, organize, collaborate and create community (Oh, I so love alliteration!). In the beginning was email. Email permitted Dislocated, Asynchronous, one-to-one and one-to-many communication. The next step in the evolution of internet communication was the construction of intact communities, identified and embodied by online discussion groups. Each discussion group had an agenda and a focus that attracted people with similar interests. All early communities (including the WELL and USENET newsgroups) were constructed out of the raw material of email. Community members sent and received postings in the form of email, which were aggregated on servers as threaded discussions that retained the context of each specific conversation. While email alone allowed only scattered communication to occur, a threaded discussion retained the discursive structure of a conversation. From such humble beginnings we have come to the age of SMS, Chat, Twitter, Facebook, Blogs, Wikis (including Wikipedia), Skype, YouTube, Flickr, Slideshare, Digg, Delicious, Slashdot, Last.fm, SecondLife, Moodle, and MMORPGs.
''The whole Usenet phenomenon was one of the really early indicators of what was going to happen on the Web,'' said Dr. David Farber, a professor of telecommunications at the University of Pennsylvania. ''The incredibly dynamic discussion groups, the flames, the spamming, everything that's now considered a great and unique property of the Web, and everything that's considered a bad and unique property of the Web, was all there on Usenet.'' (NYT)
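The mechanics of that shift -- loose email turned into threaded discussions that preserve conversational context -- rested on little more than message identifiers and reply references, in the spirit of email's Message-ID and In-Reply-To headers. A minimal sketch, with invented postings, of how a server might group flat postings into threads:

# Minimal sketch of turning flat email-style postings into threaded
# discussions, the way early community servers preserved conversational
# context. Messages carry Message-ID / In-Reply-To style fields; the data
# is invented for illustration.

from collections import defaultdict

messages = [
    {"id": "m1", "reply_to": None, "author": "ann",  "subject": "Best text editor?"},
    {"id": "m2", "reply_to": "m1", "author": "bob",  "subject": "Re: Best text editor?"},
    {"id": "m3", "reply_to": "m2", "author": "ann",  "subject": "Re: Best text editor?"},
    {"id": "m4", "reply_to": None, "author": "cara", "subject": "Meetup next week"},
]

def build_threads(messages):
    """Group postings under the root message they ultimately reply to."""
    parent = {m["id"]: m["reply_to"] for m in messages}
    def root(mid):
        while parent[mid] is not None:
            mid = parent[mid]
        return mid
    threads = defaultdict(list)
    for m in messages:
        threads[root(m["id"])].append(m)
    return threads

for root_id, posts in build_threads(messages).items():
    print(root_id, "->", [p["author"] for p in posts])
# m1 -> ['ann', 'bob', 'ann']
# m4 -> ['cara']

Much of what followed -- moderation, scoring, search -- is layered on top of a grouping of roughly this shape.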
So what does all this stuff about Social Media have to do with any New Social Order? Is it possible for technology to transform the fundamental structure of human society? Let's take a look at a phenomenon that is unlikely to have occurred without the internet and social media (ancient or modern). This phenomenon, called Linux, is located in the context of a larger movement called Open Source. The Linux operating system is directed by an individual, Linus Torvalds, but the responsibility for developing and maintaining it rests with a vast community of thousands dispersed across the globe, connected only through the internet, most of whom have never met (nor are ever likely to meet) in person. Similar processes are observed in most open source projects. Some key characteristics of community-based open source development include:
  • members self-select to join
  • membership is free and open: people may join or leave whenever they wish
  • group tasks are established through discussion and consensus-generation among members
  • members volunteer to accomplish various group tasks
  • there is little or no hierarchy
  • individuals take ownership of specific responsibilities, thereby becoming responsibility leads or point-persons
  • there are extensive discussions among members
  • members participate to whatever extent, and in whatever manner, they wish
  • the act of participation and contribution to group goals is its own reward
  • there are no material rewards, just the thrill of participation and possible peer-recognition
  • if there are disagreements, some members may 'fork' (branch away) a project and take it in a different direction, yet retain links to the mainstream group
  • a member's status in a group is built entirely through merit and contribution
  • practically all communication and community building is carried out over the internet
This model seems to have worked very well -- Linux is a respected (and yet free) operating system of very high quality, used around the world, including by large, traditional, for-profit corporations. The reasons for Linux's success were well analyzed in an essay by Eric Raymond called The Cathedral and the Bazaar. Linux -- and, in general, Open Source projects -- was built using an organizational structure far removed from the traditional 'Second Wave' structure one might find even in an ostensibly 'Third Wave' corporation such as Microsoft; despite being a modern, information-based corporation, Microsoft is structured like an organization founded on 19th-century ideas. That Linux succeeded suggests that the time may be ripe for many organizations -- especially those that have embraced the Third Wave (at least in their products and services) -- to also embrace Third Wave organizational structures and processes of the kind the internet affords.

The cooperative and collaborative nature of internet communities, and the absence of traditional hierarchical structures and bureaucratic processes, make many who ought to know better view such communities as some new kind of Socialist Order. I'm referring to the folks at Wired magazine -- a source of information that I find thought-provoking but which, like many in the popular media, can get lost in the hype. It's a good read, peppered with interesting ideas as well as clouded perspectives, which makes it at once aggravating and interesting.

Leaning on a community for support and cooperation is not the same as communism or socialism. Much of this sort of rhetoric tends to come from the United States, where fighting communism and socialism was elevated to the status of a State Religion in the early part of the 20th century. People organizing for any purpose, including to face large, powerful corporations, was conflated with communism. Indeed, the myth of the Lone Cowboy accomplishing superhuman feats single-handedly was likely created to counter any attempts by people to work together. But working together for a shared purpose -- even while retaining individual operative freedom -- has been a hallmark of human behavior since time immemorial. Communism and socialism, on the other hand, are well-defined ideologies for running entire nations and economies through state control. The State is not equal to The Community. So-called communist nations like China and the former USSR pretended to be community-oriented but were (and are) in fact dictatorships of the proletariat (and not even the entire proletariat, but just a very few chosen ones). Communism and Socialism (much like Free Market Capitalism) were frauds perpetrated on a largely unschooled people. Some good education can have the salutary effect of putting pernicious and flawed ideologies to rest forever.

Collaborating and cooperating for the common good has been a feature of human society all throughout human (and likely even pre-human) history. Collaborating as a community is not the same as having an entity called The State enforce collectivism on a frequently unwilling populace. If working together for the common good merits being called an 'ism' like Communism, then the act of breathing likely should be filed under 'Breathingism' and eating under 'Eatingism'. What the internet has made possible is to extend the context of cooperation and collaboration well beyond the Synchronous and Colocated. Another key distinction between internet community and communism is the voluntary nature of collaborative work -- the State is nowhere in the picture. Indeed, members of internet communities tend to harbor a strongly libertarian, often anarchist, ethos and would quit any community that came under the aegis of a State-like entity. I don't see how Kevin Kelly, the author of the aforementioned Wired article, could use the terms communism and socialism in reference to what's happening on the internet when he writes:
The type of communism with which Gates hoped to tar the creators of Linux was born in an era of enforced borders, centralized communications, and top-heavy industrial processes. Those constraints gave rise to a type of collective ownership that replaced the brilliant chaos of a free market with scientific five-year plans devised by an all-powerful politburo. This political operating system failed, to put it mildly. However, unlike those older strains of red-flag socialism, the new socialism runs over a borderless Internet, through a tightly integrated global economy. It is designed to heighten individual autonomy and thwart centralization. It is decentralization extreme.
Then he writes:
I recognize that the word socialism is bound to make many readers twitch. It carries tremendous cultural baggage, as do the related terms communal, communitarian, and collective. I use socialism because technically it is the best word to indicate a range of technologies that rely for their power on social interactions.
No, Kevin, it ain't the best word. Not even close. Then he gets worse:
When masses of people who own the means of production work toward a common goal and share their products in common, when they contribute labor without wages and enjoy the fruits free of charge, it's not unreasonable to call that socialism.
Oh no, Kevin! It isn't! If it is, then practically every 'social' organism -- including honey bees, for instance -- practices socialism, and has been doing so since the origin of its species. I seriously doubt that any honey bees got to read Karl Marx and managed to convert the hive to the One True Way. Finally, he gets it right:
But there is one way in which socialism is the wrong word for what is happening: It is not an ideology. It demands no rigid creed. Rather, it is a spectrum of attitudes, techniques, and tools that promote collaboration, sharing, aggregation, coordination, ad hocracy, and a host of other newly enabled types of social cooperation.
In fact, this is exactly why the New Internet Order cannot be called socialism at all. The so-called social media that ride the rails of the Web help restore to human beings the power to be human and do what comes naturally to them: cooperating and collaborating without giving up their individuality. What seems like socialism to some American minds is the consequence of decades of propaganda (through movies, books, and the media in general) that has portrayed the Ideal American as a Lone Ranger, a cowboy or Superman who single-handedly fights and defeats bad guys; such mythical portrayals have had the effect of making generations of Americans believe that there is something dirty about working together for the common good. Perhaps the propaganda was meant to dissuade Americans from participating in labor unions and engaging in collective bargaining -- and thereby stem the growing tide that might have led to a communist takeover in that country a century ago. But the propaganda effort (if any) seems to have gone way overboard, warping the views of even otherwise sensible writers like Kevin Kelly.

It is dangerous and seriously misleading to compare the new phenomenon of internet community with either communism or its milder cousin, socialism, for such pernicious associations are likely to keep many otherwise talented contributors from participating in processes that they would otherwise enjoy and support. What the web provides is a theoretically infinite variety of configurations to organize and structure human interactions, create and maintain community and accomplish a wide variety of common goals beneficial not only to members, but also the larger community outside. Wikipedia is just one outstanding example of the phenomenon called crowdsourcing where questions or needs are shot off into the ether, as it were, and somebody (or a collection of somebodys), somewhere in the ether constructs an appropriate response, far better than something that could have been accomplished by the members of a single, conventional, brick-and-mortar organization. Crowdsourcing works because a planet of six billion persons is seething with talent, expertise and wisdom that cannot be captured within the boundaries of any organization, however large it might be. Crowdsourcing relies on the power of anarchy, which in itself is a challenge to any static organizational structure.

Craigslist, a 33-person outfit with global operations, is a household name among the young and inter-connected, an indispensable resource that has notched up revenues of USD 100 million. Craigslist has grown entirely through word of mouth -- or net. A low-tech site founded in 1995 by Craig Newmark, Craigslist is a community resource that turns a decent profit even while most of its users pay nothing for its services. Newmark has turned away many offers to sell out, although eBay now has a minority interest in the company. From the Wikipedia article:
Having observed people helping one another in friendly, social and trusting communal ways on the Internet, the WELL, and Usenet, and feeling isolated as a relative newcomer to San Francisco, Craigslist founder Craig Newmark decided to create something similar for local events.
The thing about web communities is that they are beyond ideology -- this is neither socialism nor capitalism in action; it is pure human social organization driven by our basic instincts and mediated by technology. Those who seek to view such processes through the lenses of ideology invariably generate warped opinions of what they see. Social media are not the New Socialism -- they are, in fact, tools to deter or defeat States and powers of whatever complexion. Camelia Entekhabifard, an Iranian emigre in the US who fled the tyrannical Ayatollah regime, writes in the NYT:
Thanks to YouTube, Facebook and blogs, it’s easier for young people to organize, express their grievances and learn personal information about top officials.
This is not socialism: this is allowing people to live like people, unconstrained by ideology or politics. In a highly mobile society where individuals and families may move several times during their lifetimes, frequently resulting in family members and close friends becoming widely dispersed, social media help restore and reconstruct communal links. The salutary effects go beyond just that:
“One of the greatest challenges or losses that we face as older adults, frankly, is not about our health, but it’s actually about our social network deteriorating on us, because our friends get sick, our spouse passes away, friends pass away, or we move,” said Joseph F. Coughlin, director of the AgeLab at the Massachusetts Institute of Technology.
...
Some research suggests that loneliness can hasten dementia, and Dr. Nicholas A. Christakis, an internist and social scientist at Harvard, says he is considering research on whether online social connections can help delay dementia, as traditional ones have been found to do in some studies.
Apart from Facebook and MySpace, there are now social networking sites exclusively for the elderly, like Eons.com. About the only things that the terms socialising and socialism have in common are their first eight letters and the fact that they have to do with people. The New Media and the New Internet Social Order are about giving people options for building and participating in communities in a wide variety of configurations for a whole host of purposes, all of which restore to them their sense of dignity as human beings, not faceless serfs or subjects of some impersonal state or corporation. Social media help amplify the sense of self that often gets lost in an increasingly noisy, crowded, depersonalizing and dehumanizing world. It may seem paradoxical that the web, which is itself viewed by some as excessively noisy, may actually provide a way for many to poke a hole in that wall of noise and reach out to kindred souls dispersed across the ether. Internet communities and social media provide channels that are insulated from each other, cutting down the noise; they can be as quiet as one wishes, since they permit one to control the extent of one's participation.

Above all, social media provide the means to explore and experiment with social innovation -- new ways to communicate, to interact, to collaborate, and new forms of community. None of this reeks of socialism to me; I wouldn't go anywhere near it if it did. I use social media because they enable me to participate in communities that I never otherwise could have. Social media bring the citizens of the world a lot closer together and remind us that we are all highly dependent on one another and that together we can solve our collective problems and perhaps make life on earth a lot more pleasant and fulfilling.

UPDATE June 16, 2009: Blogs, Facebook and Twitter are helping Iranian protestors supporting opposition candidate Mir Hossein Moussavi communicate, collaborate and coordinate their activities while protesting the fraudulent election that recently returned President Mahmoud Ahmadinejad to power. It's notable that the New Media has left the traditional news media way behind. The government is responding by trying to shut down communication networks -- thereby announcing its sense of insecurity, like any autocratic regime. This may not have the desired effect, however.
Jonathan Zittrain, a professor at Harvard Law School who is an expert on the Internet, said that Twitter was particularly resilient to censorship because it had so many ways for its posts to originate — from a phone, a Web browser or specialized applications — and so many outlets for those posts to appear.

As each new home for this material becomes a new target for censorship, he said, a repressive system faces a game of whack-a-mole in blocking Internet address after Internet address carrying the subversive material.
And ironically,
“It is easy for Twitter feeds to be echoed everywhere else in the world,” Mr. Zittrain said. “The qualities that make Twitter seem inane and half-baked are what make it so powerful.”
Viva la half-bakery!