Showing posts with label Future. Show all posts

Sunday, January 17, 2010

How can you turn that into an application?

While going through a recent edition of BusinessWeek, I came across a story on how publishing houses are trying to counter the rising dominance of Amazon ("Trying to Avert a Digital Horror Story", BusinessWeek, January 11).

There is a mention of how people are interested in paying for a variation on the standard book, viz. a single chapter or a searchable database. As a result, some publishers are now considering bringing out iPhone applications for some of their books.

This is fascinating. First, books transitioned into their paper-free avatar, the eBook, and now they are going one step further: into applications.


There is a mention of a book called “What to Drink with What You Eat”, which the publishers are now trying to turn into an app that works like a ‘virtual sommelier cum food critic’, featuring food and wine pairings, tutorials, and flavour balancing.
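At its simplest, a book-as-app of this kind is a lookup over the book's pairing tables. Here is a minimal sketch of that idea; the dishes, wines, and pairings below are invented placeholders, not content from the actual book.

```python
# Hypothetical pairing table: in a real app this would come from the
# book's own recommendations, not these made-up entries.
PAIRINGS = {
    "grilled salmon": ["Pinot Noir", "dry Rosé"],
    "roast chicken": ["Chardonnay", "Beaujolais"],
    "dark chocolate": ["Port", "Banyuls"],
}

def suggest_wine(dish):
    """Return suggested wines for a dish, or an empty list if unknown."""
    return PAIRINGS.get(dish.lower().strip(), [])

print(suggest_wine("Grilled Salmon"))  # ['Pinot Noir', 'dry Rosé']
```

The point is less the code than the shift it represents: the same knowledge, repackaged so it can be queried at the dinner table.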

This signifies a shift in the way publishing houses think and has an impact on the way information will be packaged in the future. The example of a book turning into an application shows how knowledge is being turned into applied knowledge. It seems that, just as there has always been a market for the knowledge in books, the market for the application of that knowledge will become bigger and bigger in the future. Being able to use the things we read in a book, whether in a meeting, over lunch, or while traveling, offers a big opportunity for books to expand their relevance and impact.


‘The application mindset’, as I call it, can potentially turn almost every idea, every bit of information, into a byte-sized tool that is always on tap. Thanks to the effective miniaturization of technology, the future of knowledge and information will go further and further down the application path.

“What can be the iPhone application for this idea?” is a question that can help us unlock the potential of any good idea that crosses our mind in the future.

Monday, December 14, 2009

Is Netbook becoming a Notebook? - Part 2

IDC forecasts that by the end of 2009 almost 20 million Netbooks will have been sold, representing almost 12% of the entire Notebook market. To me this is already a sizable chunk of the mobile computing market.
Now step back and imagine: what if Netbooks had entered the mobile computing space before Notebooks did? Would we still prefer the Notebook to the Netbook? Think.

To me the real potential of the Netbook is as big as today's Notebook market plus the bottom end of the PC market. That is the real number of people who need Netbooks.



The question is: how can we get Netbooks to them faster?
Somehow companies seem to be in two minds about promoting their Netbooks, perhaps because the more they promote these, the more competition they create for their own pricier, more loaded Notebooks.

But isn’t this cannibalization inevitable? After all, sooner or later the majority of consumers are going to realize that a Netbook can address their needs.

This situation somehow reminds me of the film-roll camera vs. digital camera conundrum of 8-10 years back. The only thing that makes Netbooks poised for even faster adoption is their lower price. (Digital cameras were very pricey at the time of their launch.)

If we think about the needs of the three potential user groups that we talked about earlier, it becomes evident that there is enough room to market different kinds of Netbooks to these three groups. The frequent traveler is looking for a professional Netbook, the young girl is looking for a cute Netbook, and the students just want their first digital workbook in the shape of a Netbook.

There is a huge market for these light & smart devices, and we as marketers and manufacturers need to allow this market its rightful place, be it the airport lounge of a frequent traveler, the studio apartment of a young girl, or a classroom anywhere in the world.

Is Netbook becoming a Notebook? - Part 1

(and Notebook is becoming a Desktop?)
Recently my Mandarin teacher introduced me to her newly acquired Lenovo IdeaPad, a Netbook. It is a small, sexy, simple, and solidly built thing. Interestingly, my teacher already has a Notebook, but she found it too heavy to lug around; hence the Netbook.
Did she compromise a lot for this portability? Not really. Except for the fact that the screen is just about 12” and there is no DVD drive, there is virtually no perceptible difference in performance. Even the smaller keyboard was not limiting in any perceptible way. Given that my teacher is not a graphic designer or a hardcore gamer, it is no surprise that the Netbook is just what works for her.
I asked her how often she used the Notebook now that she had an alternative, and as it turns out, her Notebook has now become her home PC.

To me, this kind of user is not an exception. In fact I would go so far as to say that the majority of laptop users today use their Notebooks for similar things. Most of the time they are checking/sending emails, doing IM chats, creating and saving MS Office files, surfing the Internet, playing some flash games, or saving an interesting video downloaded from some website or received in a mail from a friend. Come to think of it, there is not much beyond this.



This usage behaviour is not new; what is relatively new is having the option of buying something like a Netbook. But there was, and in many situations still is, a perceptual block towards Netbooks, viz. the fear of buying a slow-performing computer that jeopardizes everyday work, the doubt that the keyboard is too small to allow ease of use, etc. Increased adoption will potentially change all this. The more people see what a Netbook is capable of, the less they will doubt.

In this adoption cycle, the groups with the most potential, and the blocks most likely to prevent them from choosing the Netbook, can perhaps be summarized thus:

1. Group 1: ‘Netbook is the new Notebook!’
The frequently traveling ‘non-organization man or woman’ who works for himself or herself (and might have a laptop and/or desktop as a home or SOHO computer)

Potential Block 1: “Does it look professional enough for that meeting?”
Potential Block 2: “Does it work seamlessly like laptops?”

2. Group 2: ‘Netbook is the way a computer should be’
A large group of young female users who prefer ‘cute portability’ to ‘pricier specs’. They choose the clear benefit of simplicity over the hard-to-understand definition of performance.

Potential Block: “But I have not seen any of my friends using it. My boyfriend never suggested I buy one. Is it a good choice?”

And now the largest and the most under-utilized user group:
3. Group 3: ‘Netbook is a Workbook’
Students comprise the largest group who should be using Netbooks. Visualize the Netbook case as the new school bag.
Student applications are perhaps best suited to the adoption of Netbooks; no wonder this is the group that inspired the creation of the Netbook in the first place.

Potential Block: “My school tells me I can only buy XYZ brand of Notebook; I can get it at a special price.”



Together these three segments can be really big. In fact, the third segment by itself is a huge under-utilized opportunity.

This post will be continued.

Thursday, July 30, 2009

Divergence 2


The scenario of Divergence that I talked about in the last post is probable; however, its probability hinges on some basic consumer factors. There could be more, but I will start by outlining three:

1. What percentage of total users would like to have separate portable devices for functions like taking pictures, navigation, music, games, etc.?
2. Within this group, what percentage of people would like to upload or download pictures instantly after capture, or play music and games online and on the go?
3. And finally, how many people from the second group would be prepared to pay for the wireless connectivity services that enable these activities?
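The three questions above form a simple multiplication funnel: each stage keeps only a fraction of the previous one. A quick sketch, using purely hypothetical numbers of my own invention, shows how the addressable market narrows:

```python
# All figures here are made up solely to illustrate the funnel logic.
total_users = 100_000_000        # hypothetical pool of portable-device users

want_separate_devices = 0.30     # Q1: prefer dedicated gadgets
want_instant_connectivity = 0.40 # Q2: want on-the-go upload / online play
will_pay_for_wireless = 0.25     # Q3: prepared to pay for the service

# Each stage filters the previous one, so the fractions multiply.
addressable = (total_users
               * want_separate_devices
               * want_instant_connectivity
               * will_pay_for_wireless)

print(f"{addressable:,.0f} paying users")  # 3,000,000 paying users
```

Even generous-looking percentages compound into a small final number, which is why conventional logic says such consumers are few.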

Conventional logic would perhaps make us believe that such consumers are very few. However, I believe that the adoption of such new gadgets and services might not follow the linear course outlined above. A lot hinges on the form and experience of these new gadgets and wireless services. (The future surprises us all the time: it was difficult to imagine that people would always want to carry all the music that they liked. But as we saw with the success of the Apple iPod, they did, and even paid a premium for it!)

The other set of questions that these gadgets and services raise has to do with operational factors like industry standards and interfaces. What application software would these devices run so as to be able to interface with the Internet? Which operating system would they use?

I remember reading Everyware, by Adam Greenfield, last year. Greenfield offers a sneak peek into the questions that ubiquitous computing (ubicomp) will raise in the future. The book looks at the ecosystem of the wirelessly interconnected devices of the future. Greenfield goes on to talk about the importance of finding a common standard for all these things to be able to interface effectively. AT&T’s pursuit that I discussed in the last post would make this need even more immediate.

What is exciting to visualize is what would happen if this does come to pass. In other words, what could be the social impact of these technology developments, and how could they potentially change our behaviour in the time to come?
Questions from our future are heading our way fast!

Monday, July 27, 2009

Divergence 1


(It has been a long break (again). I guess too much of work at work is not the best thing for a Blog. Hope to be more regular from here on..)

Digital cameras, game consoles, e-book readers, GPS devices, etc., are all familiar gadgets of everyday use.
Now imagine all these devices having the ability to hook up to the Internet on their own.
• I mean the camera does not need to wait to be near a computer to transfer pictures, nor does it need to be attached to a mobile phone to share them
• The mobile game console can surf the net in wide area - wirelessly
• The GPS device does not need to be attached to a mobile phone to get its coordinates right
• The specialized e book reader (largely like the Kindle of today) hooks up to the Internet on its own, wirelessly.

If AT&T were to have its way, we could soon see a mobile service provider selling subsidized cameras, GPS devices, e-book readers, handheld video game devices, and more (BusinessWeek, July 2009). The motivation for a company like AT&T is quite simple: the more wireless connections, the greater the usage, and thus the higher the revenues.

So what’s the big deal? Two things, I think:

1. “‘Master Gadget’ Never Really Came (the wait is forever?)”
This indicates that the device approach to the definition of convergence (one device does it all) has not come true. Somewhere, corporations have realized that it is not worthwhile to wait for that ‘one device’ that people would start using as their master gadget (i.e. to make calls, surf the web, take pictures, read books, listen to music, get directions, play games, etc.)
Why has it not come? Well, it is debatable.
Corporations would prefer to say that the majority of consumers are not ready to switch to, and pay for, that one device.
Consumers would say there is no worthy device that delivers satisfactorily on even half of the desired functions.

I guess the iPhone or iPod Touch are examples of devices that have managed to come close to the desired multi-functionality.

2. “The real war for standards (operating system etc) is perhaps about to begin”
Which OS version and Internet browser would these many different devices (made by manufacturers spread across the world, from Tokyo and Seoul to Shenzhen) use? Internet Explorer, Firefox, Google Chrome, Safari, or Opera? Netbooks going the Android or UNIX way, or Nokia building Symbian as the mobile OS standard, etc., are not the real news. The real news would perhaps be who among Microsoft, Nokia, Google, etc. ends up holding the majority of the operating system and application software sky.

Monday, April 13, 2009

So where was I going? What was I doing?


Does it occur to us that every time we log on to the Internet, more often than not, we end up spending more time than we originally planned? More so at home, given that the proverbial ‘cyber café clock’ is not there to haunt us!
Over-running our planned Internet time is linked with aimless clicking from one page to another. WILF, or ‘What Was I Looking For’, is the expression used to describe this phenomenon. WILF happens mainly due to the hyperlink-to-hyperlink clicking that we engage in: reading about something, clicking on a link that has more to tell about it, and so on; the chain can be endless.
Once we are on the chain we can easily lose track not just of time but also of what it is that we were originally looking for, thus ‘what was I looking for..’ This chain starts with the random surfing that begins while the browser window is loading the web-mail page, or in the time between clicking on an attachment icon and the opening of the file, and many more similar in-between moments. These are the moments in which we engage in checking out ‘the other stuff’! As a result, after about 2 hrs of what we would describe as ‘checking email’ and ‘surfing’, what we have actually done is read about 4 mails, deleted 15 spam mails, and done a lot of ‘random clicking’. Many young college-goers actually suffer from such net addiction.

However, this post is not about WILF. This post is about the potential impact of location-aware mobile devices on our movement through everyday physical space. This post is about the possibilities that would emerge when our location-aware mobile devices start interacting with user-generated soft maps (soft map = city map layered with information about personal preferences, viz. my favourite pub, the quietest street, the best pizza, ‘my crush lives here’, get your camera & click the sunset from this point, the best park for the morning jog, etc.)

When people walk around while being constantly told, by the location-aware mobile device sitting in their pocket, about the best that they could do in the place they are in, wouldn’t they be prompted in a way similar to the way an interesting hyperlink prompts people on a web page?
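A crude sketch of how such a prompt might work: the device compares its GPS position against the user-tagged spots on the soft map and surfaces any tag within a small radius. The tags, coordinates, and radius below are invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A toy "soft map": tag name plus the spot's latitude and longitude.
SOFT_MAP = [
    ("best pizza", 31.2304, 121.4737),
    ("quietest street", 31.2350, 121.4800),
]

def nearby_prompts(lat, lon, radius_m=300):
    """Return the tags within radius_m of the current position."""
    return [tag for tag, tlat, tlon in SOFT_MAP
            if haversine_m(lat, lon, tlat, tlon) <= radius_m]

print(nearby_prompts(31.2305, 121.4740))  # ['best pizza']
```

A real service would also filter by the user's stored preferences, which is exactly what would make these prompts so hard to resist.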

Of course a lot of this prompting could be switched off, perhaps almost in the same way that we block unwanted pop-ups on websites. However, location prompts could be harder to resist, for they would not just be linked to our physical location (and thus be much more relevant) but also be sensitive to our preferences, which our mobile device would be much more aware of.
For want of a better example, Amazon’s customized home page is the closest web equivalent of an irresistible prompt of the location-sensitive future.

I am curious to know the social impact of this consumer technology that is headed our way – soon!

Imagine young people wandering around the city, moving from district to district while staring into their mobile devices just to get to that place their friend has tagged as ‘the place’ for the best local street snack, only to get distracted after a while by another tag that points at the best spot to click the city from an elevation, and then discovering a bargain on hand-painted T-shirts two blocks away.. and then running into a GPS-driven treasure hunt game organized by a bunch of local skateboarders who are looking for a partner.. and after a few hours of all this the person is left wondering: where was I going, what was I doing?

Thursday, September 11, 2008

Allen Adamson of Landor Associates on the Future of Branding in the Information Age



Recently I asked Allen what he saw as the future of branding in a society where almost all information would be easily findable and the concept of “myth” fades away.
His response is focused; here is what he believes -
Every smart organization knows that the “myth” of a brand fades as soon as a consumer realizes that the promise of the brand is not being delivered as expected. Easily findable information merely speeds up the process by which brands are outed as frauds. Strong, successful brands are not built on myths, but on clearly communicating and demonstrating what makes them relevantly different - and better - than the competition. Contrary to your premise, the more information consumers have access to, the more important branding becomes.

Brands are short cuts. They help consumers make personally meaningful choices. In a world of information overload, consumers are looking for ways to simplify and speed up the process of deciding which brand is better suited to their needs. If you’re in the market for a digital camera, for example, you now have access to tons of information: which cameras have the highest mega-pixel rating, the largest zoom, the most memory, the longest battery life, the fastest downloading speed. If a brand organization is doing its job well, the branding, no matter what form or format it takes, will help you determine which brand is best for your needs. “I’m going to buy a Nikon because the company designs products for people who are really into their pictures, or I’m going to buy a Sony because it will better integrate with my laptop.” The functionality and service components of the camera, which are also considered important aspects of the branding, will validate your choice.

While some brands may start out mythic in nature, myth has never sustained a brand long term. A strong brand is based on a simple, well-defined promise of relevant differentiation and a history of delivering on this promise as expected.
Those who are provoked, or thought-provoked, can comment here, and they might find more to mull over and debate in his new book Brand Digital: Simple Ways Top Brands Succeed in the Digital World.

Saturday, August 16, 2008

I can hear you thinking

I have always been a believer in this and I always will be: the best interface is the one that can read minds rather than wait for my hands to do something or my vocal cords to make some noise!
I wrote about this a few months back when I was passionately attacking ‘classical languages’ like English and others that we have grown up speaking and writing. As I said then and continue to say now, these languages were not developed keeping computing in mind (read the earlier post “Undocumented Irrelevance of the Written Word”).

They were designed for ‘classical social interface’. However, as we interface with ever more powerful computers, as our needs extend beyond classical social interfacing, we need an interface language that can match the task. So when something like the Epoc (from Emotiv Systems, see more here http://emotiv.com/INDS_2/inds_2_1.html) comes over the horizon, I see hope.

I have not tried the Epoc yet, but from what I read and hear about it, it gives a fair picture of things to come, and at USD 300/- (around RMB 2100/-) it is worth trying out to get a feel of how future interfaces might feel.
The Epoc is fundamentally a mind-reading headset that helps you interface with your computer (it uses the same technology as EEG, electroencephalography, but without the gel!). In other words, if it works as I imagine, in time it should be able to replace your keyboard and mouse.
Right now it seems to be more of an entertainment tool, because the primary application is immersive gaming, but I feel it can become much bigger than this. It could be a dramatic shift in human-computer interface. The iPhone ended the need to press physical buttons because the screen became the button; the Wii extended human motion into real-time onscreen responses; and the Epoc promises to completely rid us of getting physical with our machines!
I might sound like a technology fanatic, but if we think objectively, we have in the past been very unfair to the way computing works. The computing world is the world of bits and bytes, which of course is much more efficient than the human or physical world of atoms and molecules. However, just because we live in the less efficient world, we have always brought the interface down to the level of inefficiency that typifies the physical world. But with neuro-impulse interpretation, we stand to break free from the fundamental limitations of the molecular side of being human. To me, human neuro impulses come closest to the bits & bytes efficiency and accuracy paradigm of the digital world. The neuro impulse is like the digital side of our biological or molecular existence. Thus, if we can set up an interface with our computing devices at the neuro-impulse level, we would be interacting with these devices in the most efficient way that biological evolution permits us today.

Another key reason for me to stand by the neuro impulse future is that it promises to democratize technology in an unprecedented fashion. As I wrote in my earlier piece, people would not need to be ‘language-wise’ before they could start using the benefits of the new technologies.

It is worth thinking about at what level we would pick up the neuro impulse: would it be just an impulse, or would we need to wait for the impulse to be given word form before it can be interpreted? Right now I guess we are at the latter; however, I keenly look forward to language-independent neuro-impulse recognition for a superior interface with computing and, more importantly, a better life!

Monday, December 24, 2007

Undocumented Irrelevance of the Written Word


Imagine – “Would we still be ‘writing’, had we invented a sound recording device before we developed the alphabet?”

Think.


I suppose the primary objective of the written word was, and still is, ‘to Record’. But then, isn’t it surprising that we still learn to read and write, when we can ‘record’ in many other, and much more efficient, ways?

We invented the written alphabet before the audio (and then video) recording device. Had it not happened in this order, what we would have had today as language would have been aural sounds & symbols as denotations. We would have been communicating and recording information using sounds and symbols in place of written letters and words.
We can argue, however, that characters or words are also symbols of a certain kind. No matter how correct this may sound, it actually is not. The fundamental difference is that characters and words are representations of meaning that may or may not resemble that which they represent. As Christine Kenneally writes in ‘The First Word’, a word is an arbitrary association between sound and meaning. There is nothing in the sound of a word that tells us what it means or what it does. The word “Apple” does not look like an apple, does it?

Maybe it would be a good idea to step back in time and ask ourselves: out of all the equally potent senses that we as humans possess, viz. touch, smell, sight, taste, & hearing, why did we consign the written word to the sense of sight?
The reason, once again, perhaps, was that it was the best possible way of recording things for others to see in the absence of the person who recorded them.

But this choice, to depend on the verbal and visual mode of communication through written words, perhaps came at a huge price, a price that we as a species have paid in the past and are still paying, without actively realizing it.
The price is being paid in the shape of the poorer development of our other senses for capturing information, understanding it, and acting upon it. What follows might not be the best example, but I would still like to cite it to illustrate the point I am trying to make. I was amazed by how animals were warned well in advance of the disaster caused by the tsunami that struck many Asian countries a few years back. While we humans, with our ‘sophisticated’ early warning systems (based on written words and symbols!), were caught almost unaware, many other species were observed to be more prepared without any technology to back them.
Has our reliance on languages, especially the written word slowed down the development and evolution of our ability to sense things?
From the above example it does seem to have had a negative impact. Its impact is such that many of the ‘lesser species’, as we hitherto believed them to be, are seemingly ahead of us in sensing & processing critical information about our environment.

Although I do not have empirical evidence, everyday observation tells us more about another weakness of the written word. Writing is perhaps among the most unnatural things that a child learns as he grows up. To me it seems that we are not biologically coded to write. Just observe how a child learns the spoken language effortlessly; yet the maximum punishments that he gets have to do with memorizing & producing the written word. This is in sharp contrast to the seeming effortlessness with which he learns to speak, hear, and understand the language.
The need for memorization through the written word slows down the process of learning. We easily forget that memorizing the things we learn, viz. spellings, dates, facts, formulas, etc., is not really critical. These are just facts, and facts can be looked up. What really matters is how things stack up or fit with each other: the stories, the frameworks, etc. These frameworks become the operating system of our thinking, our point of view, and the basis of our judgment. Imagine how much better we would do for ourselves if we were not forced to memorize with the written word as the guiding paradigm.
Even after having learnt to read and write, we spend a large proportion of our advanced learning hours and years memorizing things. It is only when we leave school and college behind, when we start our lives at work, that we really begin to unlearn what we have remembered and start using the associative faculties of our brain, viz. how things and concepts are interrelated, and thus what the bigger picture is.

The human brain has marvelous associative processing capabilities; however, I believe that written languages sub-optimize its true potential. Could it be that we would have been a better thinking race had we developed a more natural approach to learning a language, an approach that could have complemented our almost limitless ability to think?

To me, one of the most important roles of language is communication. And in my understanding, communication is a subset of sensing. It is a way of being in touch with the world around us. However, the very act of developing the written word seems to have discounted the importance of sensing.
In our desire to standardize and simplify things we have lost a lot of those things that could not be written or read. This reality assumes ironic significance when we acknowledge that “a large and perhaps the most critical part of all communication is nonverbal.” Despite acknowledging the importance of ‘sensing’ that which is not being said or written, we are still stuck with the ‘written model of language’.

Having shared my understanding of the importance of sensing over listening and reading, I would now like to share a related observation about an everyday challenge that some of us experience but never really get to think about actively. Worse still, this challenge is often lost in the rough and tumble of our everyday work.
I am talking about the pace of our thinking. Doesn’t it occur to us sometimes that our speed of writing with a pen or pencil, or using the keyboard, is way behind the pace of our thoughts? Don’t we sometimes get frustrated when we lose thoughts and ideas just because we could not ‘document them’ when they occurred, only because we did not have the time to ‘put them down’ on a piece of paper, a PC, or our PDA?

Given these constraints of the written word, let us now look at where we are in terms of progress that can potentially help free human ability from the grip of the written model of language:

1. Today we have both audio (and video) recording devices
2. However, we still have large swathes of ‘illiterate’ populations in different parts of the world
3. At the same time, we also have people who think fast enough to find the written word cumbersome and sometimes frustratingly redundant


Why then are we still unable to rid ourselves from the grip of the ‘written word’ paradigm?

1. The illiterate need to become literate before they can use technology, which, of course, is predominantly language driven

2. Even those who are at the leading edge of technology adoption, and are faced with the impact of language as a pace retardant, still need to use the classical languages (English or any of the remaining 227 languages as options)



I see tremendous opportunity to enhance the role of technology as an enabler for people both at the bottom and the top end of the human development index.

On one end are a number of technology-dark, illiterate communities across the globe. They can in fact reap the benefits of mobile personal technologies if they are given a chance to interface with these technologies through a new standard of symbolic interface. This new standard would free personal mobile devices, among others, from the shackles of the classical language(s) that hinder ease of adoption.
This would help a large section of the third world to leapfrog into a technology-integrated world that can enable, empower, and consequently elevate the quality of their lives.

We need to ask ourselves what is easier and faster:
Waiting for the whole world to become literate before they can start using personal technologies that enable, empower, and finally uplift the quality of human life?
Or
Developing a new standard that helps these people bypass the digital divide and makes the human race more progressive?

On the other end of the human development index we have another set of technology users, whose pace of thinking can be unshackled from the tardy classical languages that constitute the standard interface for most personal computing technologies.

Computers and human brains process much faster than the speed at which we write or type. This high-speed processing can be utilized best by shifting to a more efficient language interface, one suited to high-speed cerebral and microprocessor functioning.

Today we can only think of an interface driven by aural prompts.
We have seen some of these software applications bundled & marketed as ‘added features’ in personal entertainment and communication devices. Most of this voice-recognition software has not proved robust or cost-effective enough to attract large-scale adoption. But speech, again, is only incrementally better than written text.

The ideal interface that can do justice to this high-speed processing is neural prompting, perhaps what can be called interface at the speed of thought.
With the technological developments happening in the area of understanding how exactly our brain functions (deconstructing the regions of the brain that are responsible for specific tasks), we might not be very far from actually mapping out the key functional areas of the brain, which in turn would help us interface with the digital devices around us that much more efficiently.

Though this may be far in the future, as I see it, speech and text are not the destiny of language. The language of the future could well be neural impulses traveling effortlessly between the digital processors around us and the neuro-biological cerebral processing inside us.

I am not building this case to garner support for throwing the written word out of the window. Rather, this is an honest attempt to sensitize us to the inherent weaknesses of the written word and the imperative to develop something better.

Today the written word has become a convention that nobody questions. Little do we realize that in a future integrated with personal technologies, the cumbersome nature of classical languages would hold mankind back from becoming a more efficient race.

The written word, or 'classical languages' as I prefer to call them, is a model from the past: a past when documenting was the only way to record. As we move into a technologically enabled future, we will need to develop new languages that deliver greater efficiency and versatility, a new standard of interface that does better justice to users at the two ends of the development spectrum.

In the future, language might have a bigger role to play than merely recording. It may well help us create new records as we reach new frontiers of human development!

Are we ready, already?


Saurabh Sharma

Monday, September 24, 2007

wiki life

RapLeaf, Spock and Wink.com are three examples of how online content is close to complete democratization. The profile you create and the pictures you post can all be modified or 'corrected' by visitors. Anyone can add photos or a short description to any profile; Spock members then vote on what stays and what goes.
Basically, people who know you, or 'think' they know you, can say anything about you and write it too. You are not what you think you are, or what you want people to think you are; you are what the people around you think of you. It is almost as if gossip, raves and rants are no longer behind your back. It's all out there, and the last thing you want to do is lie. (A lot of Web 2.0 is about getting real anyway, just like real life, isn't it?) As RapLeaf puts it: it is more profitable to be ethical.

Beyond this 'profound awakening' about not lying on the net, there is something here for brands and marketers. I believe it is a great feedback opportunity: put your brand or ideas out there (and if you are known or stand for something, people will come with their candid raves and rants), but be prepared for everything that follows. Just like a mature man or woman, your brand must learn to accept the truth no matter how tough it might be.
To me this is a great tool to see where we stand and where we fall as people, as marketers, as friends.

Wednesday, June 20, 2007

Cameraman, now reporting!


Deutsche Telekom unveiled the ultimate web reporter at the recently held CeBIT tech fair in Germany. Web reporter Lisa Molitor had a camera mounted on the left of her headset and a screen covering her right eye. The screen displays what the camera is capturing in real time. Lisa can also transfer the images and video she captures immediately over wireless LAN to the company's website.

Cut to the field news reporters of today. The sign-offs usually run, "this is XYZ with cameraman PQR at 123 for MNO news." This is because news reporting (journalism) is a skill stream seen as separate from camera work. But with innovations like the one Lisa Molitor was demonstrating, all this is bound to change.
The future camera person and the news reporter/correspondent are going to be one. News reporting will not be complete without equal dexterity in handling wearable or embedded technology like the kind we just saw. It is about time reporters went tech, and camera persons became more content savvy.

In many ways, skills that do not scale through technology augmentation will be incomplete skills. Technology will enhance every skill and trait, and individuals who acknowledge this delivery-enhancing potential of technology will gain the most.
Other skills likely to be influenced by the advent of embedded and wearable technology include teaching/training (virtual and real time) and medicine (virtual consulting, e-medication), and many more occupations and professions, if not all of them.

Coming back to the example we started with, it is interesting to visualize the impact of these developments on the criteria for selecting future news journalists.
Will it be better looks, or speed combined with a better understanding and application of technologies?

Thursday, March 22, 2007

Self-help Tsunami


When I learned that a westerner had been surprised by the titles in an Indian bookstore, I was curious. He was surprised by the store's extensive focus on business and non-fiction books, in sharp contrast to what a typical western bookstore sells.
If Crossword is a good sample, then I must confess his observation was precise. The Indian book reader seems to be on a self-help and non-fiction spree.
A quick look at the sections and the titles being promoted reveals that we are reading a whole lot of business books. Be it finance, marketing, HR or operations; books on HOW TO sell, crack that job interview, grow faster, get that corner office, SPEAK ENGLISH and a lot more; the list is endless.

While most 'non-textbooks' have traditionally been bought for leisure reading, we have a new crop of readers who started reading books (other than textbooks) much later in life. They were not brought up on the Famous Five, the Hardy Boys or Enid Blyton as kids, and did not read Ayn Rand or Sidney Sheldon as youth or young adults. They started much later (in their late 20s or early 30s), and their decision to read books came from a realization: to get ahead in their profession or occupation, they needed to know more than what their qualification (engineering, CA or anything else) taught them. Thus began the journey of adopting a new habit: reading books that help.
You'll find these 20/30-somethings in suburban trains and buses, immersed in Shiv Khera's You Can Win or Covey's 7 Habits or The 8th Habit, clutching the overhead handles as the laptop bag hangs from a shoulder over a crumpled blue shirt. I also see them in the long, winding boarding queues at airport terminals.
These are, as some consumer behaviour books and demo/psychographic surveys call them, 'the aspirers'.

Equally interesting is the attitude of these aspirers towards the reading habits of their children. Instead of focusing only on textbooks (as they once did themselves), these parents want their children to read fiction. Why? Because they believe that a lot of English early in life, be it through reading, talking, at school, with a teacher, or in interaction with a more well-traveled and proficient uncle or aunt, has therapeutic properties. It prepares you for the world outside the classroom and the world beyond Indian shores.

I do not know why, but I can't help thinking about China's one-child policy in this context. There, it was a government regulation not to have more than one child; in India, it seems, parents have self-imposed a 'must learn the English language' policy on their kids.

In this future full of aspirer households, I am curious about the future of light/leisure reading and of regional and national languages. We might well be moving towards a non-Hindi urban India in another 10-15 years, where the preferred mode of interpersonal communication becomes English and reading, among aspirers, becomes predominantly non-fiction.
For every trend there can easily be a niche counter-trend, and I won't be surprised to see a mushrooming of training institutes that promise to groom the children or grandchildren of these aspirers in the 'innocence and originality of our mother tongue', be it Hindi or another native language.

Also, these hyper-ambitious adults might prefer different leisure activities (viz. going out, doing something) in their free time, instead of staying home and reading fiction.

Again, there could be a niche counter-trend to this: staying home, or going to a reading lounge (not a library), where you meet like-minded people and discuss and debate ideas and concepts from what you are reading or thinking.

Hyper-speculative I’d say but very likely.

Tuesday, February 20, 2007

No lungs to invest in eco stocks!


In the future there are no lungs! (and heart and kidneys etc., but more about these later).

We would not need to breathe air (oxygen) because we'd be getting oxygen to our cells (for which we actually breathe) directly through an oxygen-making source attached to or embedded in our bodies.
No breathing needed means no air needed, which means no lungs required. I'll not dwell on this 'ultimate outcome' of Genetics, Nanotechnology and Robotics (GNR), but I'd sure like to share what I believe could be its implications.

Ultimately we would not require our environment, as we love to call and click it, anymore. And the environment is air, water, other animals and plants (the latter, again, important for the oxygen they make for 'us') and 'natural sights' (as for the last, we'd be able to create natural sights virtually, and they'd be as good if not better!).
All this could actually happen as early as 2045.

Pardon me if I am making this sound too simplistic, but I am just trying to summarize the key outcomes of present-day technology explorations in the fewest and simplest words possible.

What role does the environment have beyond survival (oxygen) and aesthetic experience (those poetry-inspiring, relaxing sights)?
Of late it has come to acquire one more important role: return on investment. People (really big people) are investing in eco-stocks because they see our environment coming under extreme pressure in the medium term (those melting ice caps are part of this). The natural outcome of this warming of the planet and melting of the ice, driven by our present-day abusive industrial behaviour, would be a massive clampdown on environment-degrading industries.
That would be the day when eco-friendly companies and industries have their real value acknowledged, over the fuming chimneys of present-day industry.

But as I pointed out earlier, this holds only in the medium term (20 years), because in the long run we won't need even non-polluting windmills, just as we do not want smokestack economies today. Computation will ultimately drive everything that is important, and this primary foundation for future technology appears to require no energy. We'd be able to harness the information (yes!) stored in 'seemingly' dead objects like rocks and even trees. (More in 'The Singularity is Near'.)

That, dear reader, would be the day when our existence will surpass our environment.