TED 2012 Day 1: “LittleBits” for big dreams

Here we go again: TED 2012 started yesterday in Long Beach. I decided this year to do a short blog post every day on the talk or conversation that most inspired me.  Yesterday the amazing TED Fellows made their presentations.  It was wonderful to see these ideas presented in such a passionate and clear way.  There were many great presentations, but the one that struck me most was by Ayah Bdeir, founder of LittleBits.  Essentially, LittleBits is a concept where Lego meets engineering.

My younger daughter is fascinated by building Lego models – but honestly, there are only so many Pirates of the Caribbean ships you can build.  We need to find more interesting ways in which our children can experience the joy of building new things.  LittleBits creates a fun, creative and powerful new way for kids to learn how to build great new products.  Here is how the LittleBits team describes what they do:

“Just as LEGOs™ allow you to create complex structures with very little engineering knowledge, littleBits are simple, intuitive, space-sensitive blocks that make prototyping with sophisticated electronics a matter of snapping small magnets together. Each bit has a simple, unique function (light, sound, sensors, buttons, thresholds, pulse, motors, etc), and modules snap to make larger circuits. With a growing number of available modules, LittleBits aims to move electronics from late stages of the design process to its earliest ones, and from the hands of experts, to those of artists, makers, students and designers.”

True to form, the TED team included a LittleBits starter kit in our bag of goodies.  Even though there were only a couple of pieces to connect, I was fascinated by the simplicity of the parts and how they connected.

Here is a great video showing how to build and have fun with LittleBits. Congratulations to Ayah and her team for creating this great product.

As usual, your comments and thoughts are welcome.


Rethinking the Technology & IT Analyst Industry

Over my last twelve years working as a senior executive in the technology industry I have had the opportunity to engage with a broad cross-section of technology and IT analysts and researchers – from established firms (e.g. Gartner, Forrester), from smaller, more focused firms (e.g. Altimeter Group) and, of course, from the more recent phenomenon of the blogger/independent analyst.

For the most part, the people I have encountered are smart, have a good deal of domain knowledge, are good communicators and care about providing timely and accurate analysis and advice.  But as with all things, there is a bell curve: there are some people with amazing insight from whom I always learn, there is a whole bunch in the middle who are solid and can sometimes add good value, and, as always, there are some who really should find something else to do with their time.

This post is not about individual analysts; it is about the analyst industry.

So the issue is not the people – the issue is the structure of the industry and the inherent incentives that lead to sub-optimal analysis and advice tainted by accusations of “pay to play”.  This topic is not new and has been discussed before.  The general complaint is an old one: analysts play both sides of the game – they write about vendors and the industry, but then also get paid by those same vendors, tainting their advice.

The reason I thought this topic was important to revisit is that (1) there have been structural changes to the technology industry that make the current IT analyst model seem archaic, and (2) I have some specific thoughts on how we might try to reform the industry.

Why Change is Even More Relevant Today: There are several important changes that have taken place in the technology industry that will require some rethinking of the traditional IT Analyst Industry.

Lack of Defined Categories:  Traditionally we have had very specific functional domain experts – the CRM expert, the BI expert and so on.  I don’t think customers buy in categories any more – they buy solutions that transcend software category boundaries, making research papers focused on these categories less relevant.

Integration of Consumer &amp; Enterprise: This is one of the bigger changes in the industry – the “consumerization of the enterprise”.  Now more than ever, there is no classic enterprise software play.  As such, analysis and advice based on deep enterprise background, without the latest thinking on consumer software trends (and just focusing on social media does not cut it), fails to account for a fundamental change in the industry.

The Rise of the Consumer as the Buyer: Traditional analyst work has focused on providing insight to the CIO and associated IT teams in enterprises.  Analysts spend a great deal of time with vendors and CIOs – but the decision makers are increasingly the end users.  We still see very little end-user-based research at traditional analyst firms.

Not Enough Focus on Startups: Research coverage is still based on large and medium-sized vendors.  This is partly due to the influence of these vendors, and partly because they can afford to pay consulting fees and therefore get more attention.  The reality is that startups are where the innovation is happening, and there is no effective model today to provide customers with timely, effective insight into the innovation taking place at smaller companies.

What Can We Do – Some Suggestions: IT/Technology Analysts can play an important part in acting as sources of unbiased and informative research and analysis.  Here are some suggestions for the industry to consider.

Focus on Industry Segments not SW Categories: The buyer of software is seeking the solution to a problem.  These problems arise out of the specific dynamics of an industry (e.g. retail, banking).  Analyst firms should build much stronger industry expertise to make their advice more relevant and specific.

Rate Analysts and Firms: The financial analyst industry has this partly right (notwithstanding the failure of analysis in the financial meltdown).  Equity analysts provide very specific recommendations and then, based on their insight and accuracy, they get a rating.  Top analysts and firms get paid more and have more influence – this seems the right approach.  I agree that it is somewhat easier to rate the accuracy of financial analysts – but I am sure the industry can come up with a standard rating system that gives customers and consumers some insight on this topic.  There are plenty of examples and methods to choose from – Yahoo even has an “Analyst Performance Center” for this purpose.  Providing analyst ratings for IT/industry analysts would be a great business idea for an independent firm – I bet customers and vendors would buy this research.
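To make the idea concrete, here is a minimal sketch of what an accuracy-based analyst rating could look like, in the spirit of equity-analyst scorecards.  Everything here is hypothetical – the scoring rule, the analyst names and the sample track records are invented for illustration, not an existing industry standard:

```python
# Hypothetical sketch: rate analysts by the fraction of their specific,
# trackable calls that proved correct. The names and records are invented.

def analyst_rating(calls):
    """Fraction of an analyst's trackable calls that turned out correct."""
    if not calls:
        return None  # no trackable record yet
    return sum(1 for correct in calls if correct) / len(calls)

# Each entry: whether a specific, dated prediction proved accurate.
track_records = {
    "Analyst A": [True, True, False, True],   # 3 of 4 calls correct
    "Analyst B": [True, False, False],        # 1 of 3 calls correct
}

leaderboard = sorted(
    ((name, analyst_rating(calls)) for name, calls in track_records.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, rating in leaderboard:
    print(f"{name}: {rating:.0%} accurate")
```

A real system would of course need agreed definitions of what counts as a “call” and who verifies outcomes – that governance question, not the arithmetic, is the hard part.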

Transparency of Relationships: This will help address the “pay to play” topic.  I think specific analysts and firms should clearly disclose their economic relationship to a vendor, and this information must be attached to every report and visible on the firm’s website.  My preference would be to disclose the dollar amount, but that is probably going too far.  A more radical approach to this problem: use “buy-side” and “sell-side” analysts.  You either work only with customers to advise them on deals, or you work only with vendors to write about their innovations.

Stop Using IT Lingo:  I have written about this in a previous blog post, “Why Words are Killing the Adoption of Innovation”.  Somehow we think that the more complicated the words, the more insightful and important the analysis.  This could not be further from the truth.  The industry would be much better placed if it focused on the clarity and simplicity of its analysis.  Vendors already make it nearly impossible to understand what they are really selling – sometimes analysts add to this confusion.

Foster Independent and Small Analyst Firms: The consolidation in the analyst industry has resulted in bigger firms with more market power – this is fine, but it should be balanced by smaller, independent firms that innovate in how they bring new research and analysis to the market.  Constellation Research is a new firm that is seeking to innovate in this area, and I look forward to following their progress.

These are just a few suggestions for us to consider.  I am sure not everyone will agree with me and I am sure my analyst friends will have a relevant point of view based on their experience – I would welcome the feedback.

Hope this fosters some interesting discussion and “analysis” !


Changing How We Buy Enterprise Software !

I recently looked up the definition of Enterprise Software in Wikipedia and saw the following description: “Enterprise software, also known as enterprise application software (EAS), is software used in organizations, such as a business or government, as opposed to software chosen by individuals.”

The first part of the definition seemed good enough. It was the second part that struck me.  Enterprise software is something other than “software chosen by individuals”.

So here is the problem.  Enterprise software is usually purchased by the IT department and the Office of the CIO but is used by the average business or general user.  Now, there are good reasons why the IT department needs to be involved: compatibility, integration, security, scalability and so on.  However, the voice of the end user plays a much smaller role than it should – it is not always “software chosen by individuals”.

This is what creates the principal–agent problem in the purchase of enterprise software.  The “Agent” (the IT department) is supposed to fully represent the interests of the “Principal” (the end user or individual) and purchase software that fully meets the end user’s needs – but this often does not happen, as evidenced by frequent complaints from end users.

So how can we solve this problem?  How can the primary users of business software gain more power over what software is purchased by the IT department on their behalf?  Here are some logical suggestions.

1- The budget for enterprise software purchases should be controlled by the business units.  This may seem like a radical suggestion (though it is sometimes tried) and has potential issues.  However, I am a strong believer in the principle that those who are most impacted by a decision should own the resources that dictate that decision.

2- A software decision team of five should make the decision – three users, one IT and one Finance representative.  The numbers can differ, but my point is that the decision should be weighted towards the voice of the end user.  Now, before some of you quickly point out that end users don’t have all the knowledge or skills to make the decision – you can simply manage this by having the end users choose from a list of solutions pre-approved by IT.

3- Conduct a minimum three-month pilot with at least 5% of the users.  Yes, I know this can be expensive, but vendors may want to consider having demo systems that can actually be used by potential users.  There is nothing like actually using the software to determine whether it will do the job.  If it is possible to have two parallel demo systems from competitors in place, that is even better.

4- Make minimum user-experience ratings part of the acceptance and payment criteria.  One of the challenges of non-SaaS software is that once you have purchased it, you are stuck with it whether you like it or not.  Having a payment schedule over a year that partially rests upon user “happiness ratings” may be a good idea.  For SaaS software you could argue this is built in, as you can stop paying after a couple of months if you don’t like the software.

Now, before my vendor friends get upset that any or all of these suggestions will make the sales process longer and more complex, I would say the following: the enterprise software industry has to finally realize that the “customer” is not a faceless corporate entity or even the IT department – it is the end/business user who will use the software on a day-to-day basis.

If you make the end user happy – you will sell more software – it is as simple as that.

So the “Right Question” is: what can we do to ensure that the needs of end users are not only met but their wildest expectations exceeded?  This is what drives consumer software, and it is what should drive enterprise software, because we are selling to the same people !

As always I appreciate your comments and input on this post.



Pareto and Software !

Vilfredo Pareto is one of my all-time heroes.  His famous 80/20 rule has on numerous occasions saved me a lot of time and effort.  It is actually quite incredible how often this simple rule – that 80% of effects come from 20% of the causes – shapes our thinking and our actions.

It is equally incredible how often we ignore this powerful theory and continue to hope that the results will be different if we only keep throwing resources at a problem.  The reason I wanted to invoke the memory of Pareto and his famous principle was to explore its application to the benefits we get from software solutions.

Now, I am a firm believer in the benefits of software and how it can and does improve our lives, our businesses and our global economy.  But here is the Right Question: at what point do additional improvements or added functionality in a software product make little or no difference in enabling a user to get his/her job done?

Let’s take MS Excel as an example.  I would consider myself a moderately sophisticated user of Excel.  I have been using Excel for many years, especially during my time as an investment banker.  Excel was first released in the mid-1980s, so it has been around for over 25 years.  There have been significant improvements in Excel since those early days – in user experience, functionality, integration with other programs and so on.

But here is the issue: I cannot quantify this, but I am pretty sure that in my best Excel moments I do not use more than 10-15% of Excel’s vast capabilities.  Yes, there are probably some people who use maybe 30-40%, but it is more likely that the vast majority use only a small fraction of its formidable capability.

Now let’s look at an example from the world of enterprise software – in particular CRM (Customer Relationship Management) software.  The only goal of CRM is to drive sales in a cost-effective manner.  There should be no other objective for deploying CRM software.  If your company does not have CRM software, you can certainly benefit from it at the appropriate stage of scale (no, a two-person company does not need CRM – they just need a piece of paper and a pencil !).  But, similar to my Excel example, at what point do you already get 80% of the benefits from CRM software?  Is it at the first purchase, is it on release no. 4, or do you ever get there?

I don’t know the answer, and many will rightly argue that “it depends”.  This is always a difficult argument to win because it is a powerful argument – especially when you don’t have the courage to make a decision.  But as executives or technology professionals, we are paid to make decisions, not to live in a land of “it depends economics”.

So here is my assertion: the right software can play a critical role in driving growth and managing costs for any business – here I have no doubts.  However, I would also argue that it is more important to have a broader, integrated technology footprint than to go deep (read: deploying new versions) in any specific functional category.  So it is better to have an integrated suite (e.g. CRM, financials, supply chain, procurement, HR, mobile workforce) than to buy version 4.0 of any specific product.

If Pareto is right – and he almost always is – we probably use only 20% of any given software application’s capability to generate 80% of the value created.  An interesting thought.
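The arithmetic behind this thought is easy to sketch.  The following is a purely hypothetical illustration (the “value” of each feature is an invented, steeply skewed distribution, not measured usage data): if the value delivered by a product’s features falls off in a power-law-like way, the top 20% of features end up carrying most of the total value.

```python
# Hypothetical illustration of the 80/20 rule applied to software features.
# The "value" numbers are invented for the sketch, not real usage data.

def cumulative_value_share(values, feature_fraction):
    """Share of total value delivered by the top `feature_fraction` of features."""
    ranked = sorted(values, reverse=True)            # most valuable features first
    top_n = max(1, round(len(ranked) * feature_fraction))
    return sum(ranked[:top_n]) / sum(ranked)

# A steeply skewed (power-law-like) value distribution across 100 features.
values = [1000 / (rank ** 1.2) for rank in range(1, 101)]

share = cumulative_value_share(values, 0.20)         # value from the top 20% of features
print(f"Top 20% of features deliver {share:.0%} of the value")
```

With this particular made-up distribution the top fifth of features carries roughly four fifths of the value; the exact split depends entirely on how skewed the distribution is, which is precisely the “it depends” the post wrestles with.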

I am sure many will disagree with me and I look forward to the comments and input.


What “Togetherville.com” can teach grownups !

I am pretty excited about the launch of Togetherville.com – the new online community for kids under 10.  As any of you who are parents of young kids know, the technologies of today have created a whole new set of challenges for those of us trying to raise children in a safe and healthy environment.  While technology opens up a range of amazing opportunities for children, it does pose daunting challenges around keeping kids safe online and exposing them only to age-appropriate content.

I am a strong believer in the benefits of social media, but have found that the privacy challenges of sites like Facebook leave children with an uncomfortably low level of security.  With Togetherville you can create, in essence, a private community of children and adults who know each other and can therefore interact with peace of mind.  Congratulations to the founders Mandeep Singh Dhillon and Raj Singh Tut for a great idea.  Of course, yet again my friend Reid Hoffman has managed to support another potential killer app !

As I looked at Togetherville, I began to consider what this new venture could teach us grownups about the future of technology.  My thoughts led to two insights that build on the approach followed by Togetherville.

First, I think online privacy violations are reaching unacceptable levels.  My challenge is not with situations where you knowingly give up information when asked.  My challenge is that, more often than not, privacy settings are opaque and difficult to understand, and across sites there is no standard way to set a desired level of privacy.

I think a partial answer to this lies in setting up a “universal online privacy standard” – the same setting choices, the same levels, the same privacy implications across all websites.  Is this likely to happen?  Probably not, but it should.  Maybe we could even have a unique privacy setting attached to you as a person that travels with you as you surf the web and dynamically adjusts the settings of each website as you visit it.

Second, there is no effective way to manage exposure to age-appropriate content.  Online filtering programs work to some degree and have gotten better over time, but are far from being 100% effective.  I don’t think you can or should control what people put online, but we need to find better ways to manage its exposure – this is especially true for video content (whether user-generated or professional).

So my solution to this challenge lies in open source thinking and the movie rating system.  We clearly don’t want a central authority telling us what we can and cannot watch, and we also don’t want a central authority to rate online content as one does for films (e.g. PG-13, R).  However, I believe it would be interesting to explore an approach that provided “crowd-sourced content rating”.  For example, each website or piece of video content could have a tag that users could provide input on (e.g. 20 people rate a video as PG-13, 600 as R) – which would then provide some guidance to users and a better filter mechanism.  Ultimately, parents and kids make personal choices on what to watch, but at least such a system would provide a universal guide and benchmark for making informed decisions.
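The tallying behind such a crowd-sourced rating is straightforward.  Here is a minimal sketch using the example numbers from the paragraph above (the vote counts and the idea of taking the most-voted category are illustrative assumptions, not a proposed spec):

```python
from collections import Counter

# Hypothetical crowd-sourced content rating: each viewer votes for a rating
# category, and the tally becomes guidance for parents and filters.

def suggested_rating(votes):
    """Return the most-voted rating category and its share of all votes."""
    tally = Counter(votes)
    category, count = tally.most_common(1)[0]
    return category, count / sum(tally.values())

# The example from the text: 20 viewers rate a video PG-13, 600 rate it R.
votes = ["PG-13"] * 20 + ["R"] * 600
category, share = suggested_rating(votes)
print(f"Crowd rating: {category} ({share:.0%} of {len(votes)} votes)")
# → Crowd rating: R (97% of 620 votes)
```

A real system would need defenses against vote brigading and some minimum vote threshold before showing a rating, but the core mechanism is just this kind of tally.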

I believe that by better addressing the issues related to online privacy and content, the benefits of the internet can be more effectively enjoyed by both children and adults.  The problems will never be solved to our complete satisfaction (they are not in the real world either), but more tangible progress can and should be made.

As usual, I welcome your comments and insights.


Your “Experiencing Self” vs. Your “Remembering Self” and the Implications for Software Design !

I have been attending TED for a couple of years, and again this year I was amazed by the speakers and their insights.  For those of you not familiar with TED, I would suggest that you visit the TED website, where you will find a treasure chest of the most amazing talks on a broad range of subjects – I guarantee that you will be inspired.

TED 2010 did not disappoint – far from it.  Several talks inspired me personally, but there was one that stood out for its simple yet profound insight – Daniel Kahneman’s talk, “The Riddle of Experience vs. Memory”.  Widely regarded as the world’s most influential living psychologist, Daniel Kahneman won the Nobel Prize in Economics for his pioneering work in behavioral economics – exploring the irrational ways we make decisions about risk (TED description).  I have the deepest respect for people who can take the most complex of subjects and explain them in the simplest of ways – this is what Daniel was able to do.

Now, I will certainly not try to summarize or fully explain Daniel’s talk in this blog – for that, I suggest you visit the TED website to listen to the talk first hand.  Let me, though, try to give you the basic premise of his talk.  Daniel talks about the “confusion between experience and memory: basically it’s between being happy in your life and being happy about your life or happy with your life”.  He provides several examples of the difference between the two.  In one example, Daniel talks about a person who listens to 20 minutes of glorious symphony music, yet at the very end there is a dreadful screeching sound.  In reporting this incident, the listener said that the screeching sound had “ruined the whole experience”.  Yet, as Daniel notes, the experience had not been ruined: “What it had ruined were the memories of the experience. He had had the experience. He had had 20 minutes of glorious music. That counted for nothing because he was left with a memory; the memory was ruined, and the memory was all that he had gotten to keep.”

“What this is telling us, really, is that we might be thinking of ourselves and of other people in terms of two selves. There is an experiencing self, who lives in the present and knows the present, is capable of re-living the past, but basically it has only the present. It’s the experiencing self that the doctor approaches — you know, when the doctor asks, “Does it hurt now when I touch you here?” And then there is a remembering self, and the remembering self is the one that keeps score, and maintains the story of our life, and it’s the one that the doctor approaches in asking the question, “How have you been feeling lately?” or “How was your trip to Albania?” or something like that. Those are two very different entities, the experiencing self and the remembering self and getting confused between them is part of the mess of the notion of happiness. Now, the remembering self is a storyteller. And that really starts with a basic response of our memories –it starts immediately. We don’t only tell stories when we set out to tell stories. Our memory tells us stories, that is, what we get to keep from our experiences is a story. ” (quoted text is an excerpt from transcript of Daniel Kahneman’s 2010 TED Talk)

Implications for Software Design:  Daniel Kahneman’s talk and insights provide important lessons for the technology industry.  I think that we in the technology industry – especially the enterprise software industry – have forgotten how important it is for users to be happy when using our software products.  Consumers take for granted that a software product will deliver on the basic function it is designed to achieve – complete a purchase request, format a document or manage a supply chain.  However, all too often the software is difficult to use, not intuitive, and requires too many steps to complete a simple task.

If you view Daniel’s full TED Talk, you will note that in essence he is saying that your memory of a particular situation or event matters more than the experience of that event or situation.  This insight has important implications for how we design software: we should ensure that the memory of using the software is positive – even if the experience during use was painful.  Maybe Apple had this figured out a long time ago !

I welcome your thoughts and ideas on this topic.


