Thursday, October 30, 2008

Head Lamps to Light up the Streets?

While in an upcountry two-wheeler showroom in central China, I was amazed that the average number of headlamps per bike was more than three! Just have a look at the electric bikes in the pictures here. To me they are almost like lighthouses on two wheels!


I wondered for a while before concluding that further up the countryside, where street lighting is scarce, these headlamps would make up for its absence.
But it was not too long before I realized my naïveté. Most of these only looked like headlamps; in fact they were plain reflectors made to look like ‘fancy’ lamps!

Remember the tame reflectors on bicycles? No one wants their electric motorbike to look like that. These reflectors look anything but tame, and that’s why they are so popular!
Also, look at the iconography aiming for ‘international and premium’ associations, with stickers like Bud, VIP, etc.

To me, this is an amazing example of style mimicry applied to something fundamentally functional!
Simplicity that looks cool, and all this at no more than USD 200. Simple!

PS: You do not need a driver’s license to ride one of these!

‘Kill Think’ with Drop Downs, Search Strings & Radio Buttons

I never really believed that interactions with computing devices could actually dumb us down. In fact, even now I am quite surprised by what I have realized recently, and I am still trying to come to terms with it. I am surprised because I have always looked at computers and computing devices as great assets, for example:

* Computers as a gateway to potential new vistas through the Internet
* As tools to reach out to still smarter beings
* As great storehouses of information that usually do not forget things
* Almost as an auxiliary to the self: if I want to share something in real time, I can turn on my computer and take someone through it. This is almost like following a hyperlink in my mind that traces its path into my computer’s hard drive


However, if we were to step back and look at the entire spectrum of human-computer interfaces, we might be able to identify some early warning signs of how we might be doing a disservice to our thinking ability by relying on a ‘computer-assisted way of working’.
To me, these signs indicate that one needs to be careful not to let human-computer interface protocols become second nature. The reality, however, is that we easily develop subconscious behaviours tuned to computer situations, and then extend those behaviours into ‘non-computer’ situations.
An example would be not typing out the whole word because the computer will prompt it anyway, not caring too much about spelling, or limiting our thinking to the options given in a multiple-choice situation. The corresponding non-computer situation might be writing on paper so badly that it is unreadable or incomprehensible, or becoming uncomfortable in open-ended real-life situations where we do not have the luxury of pointing to a right or wrong answer.
I usually refer to these situations as early examples of the demise of thinking and imagination at the hands of the drop-down menu, the radio button and, most critically, the search bar.


As Jaron Zepel Lanier is quoted in Radical Evolution: “In the computer-human loop, human intelligence is the more flexible of the two. Whenever we change a piece of technology, the chances are that at the end we might be changing humans more than the technology itself.” For example, when we interact with an airline reservation bot, we get frustrated the first few times because the bot cannot go beyond a predetermined set of responses, but after some time we become ‘used to it’. In other words, we have aligned our way of thinking to suit the bot (and not the other way round).

Google’s benign question, “Did you mean bureau of Indian standards?” (in response to ‘buro of Indian stndrds’, which is what people generally type), might offer a glimpse of our future spelling capabilities, triggered by the very tendency Lanier pointed out: in the computer-human loop, humans are currently the more flexible party, and we adjust ourselves to the ‘level’ of the computing.
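(As a purely illustrative sketch of how such a suggestion might work under the hood, here is a minimal ‘did you mean’ helper in Python. It simply fuzzy-matches the typed query against a small, hypothetical list of known phrases using the standard library; it is an assumption for illustration, not Google’s actual method.)

```python
# Toy "did you mean" sketch (illustrative only; not Google's actual method).
# It fuzzy-matches a misspelled query against a small, hypothetical vocabulary.
import difflib

# Hypothetical list of phrases the system already "knows".
KNOWN_QUERIES = [
    "bureau of indian standards",
    "bureau of energy efficiency",
    "indian standard time",
]

def did_you_mean(query):
    """Return the closest known phrase, or None if nothing is close enough."""
    matches = difflib.get_close_matches(query.lower(), KNOWN_QUERIES, n=1, cutoff=0.5)
    return matches[0] if matches else None

print(did_you_mean("buro of indian stndrds"))
# -> 'bureau of indian standards': the machine absorbs the spelling work for us.
```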
While the computer can remember a lot and process a lot (gigabytes and megahertz), it is still a long way from being called a reasoning device. The irony is that while we are trying to make computers smarter, we might actually be dumbing ourselves down. We are ending up syncing down to machines instead of syncing up with them!



Maybe I am being too critical of some of these everyday interface technologies. However, it seems to me that because many of these customer-service technologies do not have good AI (artificial intelligence), we end up going down to their level, and the real fear is that the more we do that, the more that may become all we are able to do.


One can argue that these interface elements (drop-downs, radio buttons, etc.) are used for activities that are lowbrow and not worth our cerebral energies: searching for something, filling out a form, asking a question, booking an airline ticket or checking an account balance over the phone. Maybe it is true; these really might be lowbrow jobs after all. However, if we are saving time and cerebral energy by not allowing ourselves to think too much about these ‘lowbrow’ jobs, I am curious to know exactly where we are investing it instead, and how sure we are that we are becoming smarter in the process.

Tuesday, October 28, 2008

Digital hunter-gatherers?


We started as hunters who hunted when they were hungry. Then we started gathering things, farming, and rearing animals. And then we discovered the whole magical possibility of making things beyond fire; in came pots, baskets and so many other things.
Most of the things we made were for ourselves. Then barter began, and finally we started exchanging goods for currency.
All this continued to happen up until that one day when the Gong of Industrialization sounded and was heard far and wide.
In came the place of work, specialization on the assembly line, working in shifts, time cards, punch cards, swipe cards and time sheets (we still use many of these regularly), and many other things aimed at enforcing accountability: answering the questions “What time did you come in today?” (“Why?”) and “What did you do today?” (“Why?”).
While we were busy putting hours against people and apportioning money accordingly, we also started carrying devices of ‘superlative productivity’: the laptop and the pocket PC (or smartphone). Now work started roaming with us: at home, on the bus, on a plane and, many times (unfortunately), even while driving.
A steady movement from “home = work” to home and office as two separate entities, and now, finally, to the office being everywhere.
As we carry our work everywhere, and as the knowledge worker makes physical space increasingly irrelevant, I am left wondering how long we will continue to entertain the legacy of timekeeping. With lines blurring all around us, it is increasingly theoretical to compartmentalize time for work and time for everything else.

On a slightly philosophical note: the hunter-gatherer era made us work any time, anywhere and even overtime, just to keep our bellies full. Ironically, the digital hunter-gatherers seem to do just that, as they make or take that call, or send and receive that mail, just after lunch, and dinner, and breakfast too!

Tuesday, October 21, 2008

HD’s Unforgiving Clarity


We talk more over the phone because we have mobile phones. We send more and more text messages because texting is the supremely unobtrusive way to communicate. The moment we are in an unfamiliar place, we pull out our mobile phones and try to look busy. If we reach the cinema hall before our friends, we keep our eyes anchored to the luminescence of the mobile phone screen, because we do not want to ‘feel’ vacant or not in control.

There are a lot of things that personal technologies make us do which we perhaps would not have done otherwise! However, ‘dressing up for technology’ is something that I believe is new, and that is precisely the impact of High Definition. High definition tends to amplify detail up to six times more than standard definition. With cameras now offering 720p HD video recording, it is just a matter of time before our cameraman goes HD and we have to prepare ourselves for a high-definition recording. This basically translates to talc-based makeup accentuating, instead of hiding, those pimples, pigmentation, eye bags, enlarged pores and, horror of horrors, wrinkles when viewed on a high-def TV.
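(A rough back-of-the-envelope check of that ‘six times’ figure, assuming NTSC-style standard definition at 720x480; the exact ratio depends on which SD and HD formats you compare.)

```python
# Rough pixel-count comparison (assuming NTSC-style SD at 720x480).
sd = 720 * 480           # standard definition: 345,600 pixels
hd_720p = 1280 * 720     # 720p HD: 921,600 pixels
full_hd = 1920 * 1080    # 1080-line full HD: 2,073,600 pixels

print(f"720p vs SD    : {hd_720p / sd:.1f}x the pixels")   # ~2.7x
print(f"Full HD vs SD : {full_hd / sd:.1f}x the pixels")   # ~6.0x
```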

This is bad news not just for TV and movie personalities but also for blushing brides obsessed with how they will appear before the unforgiving clarity of high definition.
No wonder that companies offering makeup that can ‘counter’ the new high-definition recordings are collaborating with display and consumer electronics companies. Samsung has tied up with Make Up For Ever to present a workshop bringing together makeup and high definition.

To me, high definition, which is a kind of hi-fi (high fidelity) for the viewer, is actually making life difficult for those on camera. It will be all the more challenging for performers to manage their looks when they are recorded during live shows. I hope I do not sound too harsh when I say that, in the future, the beauty of near-perfect display technologies just might bring out the hitherto hidden ugliness of show business.

Saturday, October 11, 2008

Personal Computer?

Since the dawn of the microcomputing era, we have always prefixed ‘personal’ to the name computer, and the common assumption is that one computer is for one person. Yes, most of the leading operating systems do offer multiple user logins, but going by my observation in developed countries, and even among urban users in a developing country, one computer is still mainly used by one person. Also, with demand for enterprise computers showing signs of slowdown, home or personal computing is the biggest potential revenue stream for PC hardware makers; witness HP’s marketing communication crusade to make people feel that their (HP) computers were personal again (the ‘The computer is personal again!’ campaign).

But why is a personal computer supposed to be a personal computer?
My thoughts on this fundamental question were triggered by something very interesting that I read at www.darumainteractive.blogspot.com. Marcus had a very engaging point of view in his post about Microsoft Surface. He aptly pointed out that when we interact with a device like the Surface, we could be breaking a fundamental paradigm: the paradigm of ‘my computer’. The large multi-touch screen is an open invitation to engage many people at the same time, and this is exactly what makes the device (and perhaps all future large-screen multi-touch devices) different from the personal computer. Going by present-day user habits and perceptions, it would be difficult to convince people to do personal computing through such an interface; we are perhaps expecting to change very well-entrenched usage behaviour. Having said that, I must also modify the original quote ‘great design dissolves in behaviour’ and instead say ‘path-breaking technology and design can dissolve behaviour’.
Leaving hope aside and moving closer to today’s reality, I see such large-screen multi-touch devices having greater potential as in-home or out-of-home (but indoor) entertainment devices, something that the Microsoft Surface already promotes itself as in many of its existing videos.

Going back to the earlier point about why and how the personal computer became personal, I feel the answer lies in the limited capability of the display technologies prevailing when microcomputing started.
It might sound ironic, but it seems to me that the peripheral (the screen) limited the scope of the computing device. Over a period of time (more than 50 years) we became used to the one-screen, one-person paradigm; now in comes multi-touch, asking us to engage together and not just stay personal!
Is it unfair? Maybe it is, but at least we are getting the option! How we may want to use it will be more a function of how old we are, and thus how entrenched our usage behaviour is, or of what kinds of ‘new’ applications companies can develop for these devices.