Rethinking the Technology & IT Analyst Industry

Over the last twelve years working as a senior executive in the technology industry I have had the opportunity to engage with a broad cross-section of technology and IT analysts and researchers – from established firms (e.g., Gartner, Forrester), from smaller, more focused firms (e.g., Altimeter Group) and of course from the more recent phenomenon of the blogger/independent analyst.

For the most part the people I have encountered are smart, have a good deal of domain knowledge, are good communicators and care about providing timely and accurate analysis and advice.  But as with all things, there is a bell curve: some people have amazing insight and I always learn from them, a whole bunch in the middle are solid and can sometimes add good value, and as always there are some that really should find something else to do with their time.

This post is not about the individual analysts; it is about the analyst industry.

So the issue is not the people – the issue is the structure of the industry and the inherent incentives that lead to sub-optimal analysis and advice tainted by accusations of “pay to play”.  This topic is not new and has been discussed before.  The general complaint is an old one: analysts play both sides of the game – they write about vendors and the industry, but they also get paid by those same vendors, thus tainting their advice.

The reason I thought this topic was important to revisit is that (1) there have been structural changes to the technology industry that make the current IT analyst model seem archaic and (2) I have some specific thoughts on how we might try to reform the industry.

Why Change is Even More Relevant Today: Several important changes have taken place in the technology industry that require some rethinking of the traditional IT analyst industry.

Lack of Defined Categories:  Traditionally we have had very specific functional domain experts – the CRM expert, the BI expert and so on.  I don’t think customers buy in categories any more – they buy solutions that transcend software category boundaries, making research papers focused on these categories less relevant.

Integration of Consumer & Enterprise: This is one of the bigger changes in the industry – the “consumerization of the enterprise”.  Now more than ever there is no classic enterprise software play.  As such, analysis and advice based on deep enterprise background, without the latest thinking on consumer software trends (and just focusing on social media does not cut it), fails to integrate a fundamental change in the industry.

The Rise of the Consumer as the Buyer: Traditional analyst work has focused on providing insight to the CIO and associated IT teams in enterprises.  Analysts spend a great deal of time with vendors and CIOs – but the decision makers are increasingly the end users.  We still see very little end-user-based research at traditional analyst firms.

Not Enough Focus on Start Ups: Research coverage is still centered on large and medium-sized vendors.  This is partly due to the influence of these vendors and partly because they can afford to pay consulting fees and therefore get more attention.  The reality is that startups are where the innovation is happening, and there is no effective model today that gives customers timely insight into the innovation taking place at smaller companies.

What Can We Do – Some Suggestions: IT/technology analysts can play an important part as sources of unbiased and informative research and analysis.  Here are some suggestions for the industry to consider.

Focus on Industry Segments not SW Categories: The buyer of software is seeking the solution to a problem. These problems arise out of the specific dynamics of an industry (e.g., retail, banking).  Analyst firms should build much stronger industry expertise to make their advice more relevant and specific.

Rate Analysts and Firms: The financial analyst industry has this partly right (notwithstanding the failure of analysis in the financial meltdown).  Equity analysts provide very specific recommendations and are then rated on their insight and accuracy.  Top analysts and firms get paid more and have more influence – this seems the right approach.  I agree that it is marginally easier to rate the accuracy of financial analysts – but I am sure the industry can come up with a standard rating system that gives customers and consumers some insight on this topic.  There are plenty of examples and methods to choose from – Yahoo even has an “Analyst Performance Center” for this purpose.  This would be a great business idea for an independent firm: provide analyst ratings for IT/industry analysts – I bet customers and vendors would buy this research.
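To make this concrete, here is a tiny Python sketch of one way such a rating could be computed.  To be clear, the data model and the confidence weighting below are purely my own illustration – not an existing industry standard:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    analyst: str
    claim: str          # e.g. "Vendor X will lead the category by 2012"
    confidence: float   # analyst's stated confidence, 0.0 to 1.0
    correct: bool       # did the prediction hold up?

def rate_analysts(predictions: list[Prediction]) -> dict[str, float]:
    """Score each analyst by confidence-weighted accuracy:
    a correct high-confidence call earns more than a correct hedge,
    and a wrong high-confidence call costs more than a wrong hedge."""
    scores: dict[str, list[float]] = {}
    for p in predictions:
        outcome = p.confidence if p.correct else -p.confidence
        scores.setdefault(p.analyst, []).append(outcome)
    # Normalize the average outcome (-1..1) to a 0-100 rating.
    return {a: round(50 * (1 + sum(s) / len(s)), 1) for a, s in scores.items()}

if __name__ == "__main__":
    history = [
        Prediction("Analyst A", "SaaS CRM overtakes on-premise by 2012", 0.9, True),
        Prediction("Analyst A", "Vendor X exits the BI market", 0.6, False),
        Prediction("Analyst B", "Tablets stay a niche device", 0.8, False),
    ]
    print(rate_analysts(history))
```

The nice property of a scheme like this is that a wrong but loudly confident call costs an analyst more than an honest hedge – exactly the incentive the industry is missing today.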

Transparency of Relationships: This will help address the “pay to play” topic.  Specific analysts and firms should clearly disclose their economic relationship with a vendor, and this information must be attached to every report and be visible on the firm’s website.  My preference would be to disclose the dollar amount, but that is probably going too far. A more radical approach to this problem: use “buy side” and “sell side” analysts.  You either work only with customers, to advise them on deals, or only with vendors, to write about their innovations.

Stop Using IT Lingo:  I have written about this in a previous blog post, “Why Words are Killing the Adoption of Innovation”. Somehow we think that the more complicated the words, the more insightful and important the analysis.  This could not be further from the truth.  The industry would be much better placed if it focused on the clarity and simplicity of its analysis.  Vendors already make it impossible to understand what they are really selling – sometimes analysts add to this confusion.

Foster Independent and Small Analyst Firms: The consolidation in the analyst industry has resulted in bigger firms with more market power – this is fine, but it should be balanced by smaller, independent firms that innovate in how they bring new research and analysis to the market.  Constellation Research is a new firm that is seeking to innovate in this area and I look forward to following their progress.

These are just a few suggestions for us to consider.  I am sure not everyone will agree with me and I am sure my analyst friends will have a relevant point of view based on their experience – I would welcome the feedback.

Hope this fosters some interesting discussion and “analysis”!

Zia.

Changing How We Buy Enterprise Software!

I recently looked up the definition of Enterprise Software in Wikipedia and saw the following description: “Enterprise software, also known as enterprise application software (EAS), is software used in organizations, such as a business or government, as opposed to software chosen by individuals.”

The first part of the definition seemed good enough. It was the second part that struck me.  Enterprise software is something other than “software chosen by individuals”.

So here is the problem.  Enterprise software is usually purchased by the IT department and the office of the CIO but is used by the average business or general user.  Now there are good reasons why the IT department needs to be involved – compatibility, integration, security, scalability and so on.  However, the voice of the end user plays a much smaller role than it should – it is not always “software chosen by individuals”.

This is what creates the principal–agent problem in the purchase of enterprise software. The “Agent” (the IT department) is supposed to fully represent the interests of the “Principal” (the end user or individual) and purchase software that fully meets the end user’s needs – this often does not happen, as evidenced by frequent complaints from end users.

So how can we solve this problem – how can those who are the primary users of business software gain more power over what software is purchased by the IT department on their behalf?  Here are some suggestions.

1- The budget for enterprise software purchases should be controlled by the business units. This may seem like a radical suggestion (though it is sometimes tried) and it has potential issues.  However, I am a strong believer that those who are most impacted by a decision should own the resources that drive that decision.

2- A software decision team of five should make the decision – three users, one IT and one finance representative. The numbers can differ, but my point is that the decision should be weighted towards the voice of the end user.  Now, before some of you quickly point out that end users don’t have all the knowledge or skills to make the decision – you can manage this by having IT select from a list of solutions pre-approved by the end user representatives.

3- Conduct a minimum three-month pilot with at least 5% of the users. Yes, I know this can be expensive, but vendors may want to consider having demo systems that can actually be used by potential users.  Nothing beats actually using the software to determine if it will do the job. If it is possible to have two parallel demo systems from competing vendors in place, even better.

4- Have minimum user experience ratings as part of the acceptance and payment criteria. One of the challenges of non-SaaS software is that once you have purchased it you are stuck with it, whether you like it or not. Having a payment schedule over a year that partially rests on user “happiness ratings” may be a good idea (a minimal sketch of this idea follows below).  For SaaS software you could argue this is built in, as you can stop paying after a couple of months if you don’t like the software.
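To illustrate suggestion 4, here is a minimal Python sketch of how a happiness-gated payment tranche might work.  The rating scale, threshold and holdback percentage are hypothetical numbers for illustration only – the actual terms would be negotiated in the contract:

```python
def tranche_due(ratings: list[int], threshold: float = 3.5,
                tranche_amount: float = 250_000.0) -> float:
    """Release a quarterly payment tranche in full only if the average
    user happiness rating (1-5 scale) meets the agreed threshold;
    otherwise release a reduced amount and hold back the rest."""
    if not ratings:
        return 0.0  # no survey data collected, nothing released yet
    avg = sum(ratings) / len(ratings)
    # Full tranche if users are happy; 50% holdback if they are not.
    return tranche_amount if avg >= threshold else tranche_amount * 0.5

# Example: a quarterly survey of eight pilot users
print(tranche_due([4, 5, 3, 4, 4, 2, 5, 4]))  # avg 3.875 -> full tranche
```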

Now, before my vendor friends get upset that any or all of these suggestions will make the sales process longer and more complex, I would say the following: the enterprise software industry has to finally realize that the “customer” is not a faceless corporate entity or even the IT department – it is the end/business user who will use the software on a day-to-day basis.

If you make the end user happy – you will sell more software – it is as simple as that.

So the “Right Question” is: what can we do to ensure that the needs of end users are not only met but their wildest expectations exceeded?  This is what drives consumer software and it is what should drive enterprise software, because we are selling to the same people!

As always I appreciate your comments and input on this post.

Thanks,

Zia.

Can Cities Talk?

Can cities talk?

It has been a couple of months since my last post – but I think I have a good reason. As many of you know, I recently took on the CEO role at a company called Streetline Inc. Going into any new company takes time and effort – hence the delayed posts. But here I am again – for better or worse!

So let’s get to it. My “Right Question”: can cities talk?  First, what do I mean by that?  I am talking about the exciting world of sensors, aka the “internet of things”, aka the “smart grid”, aka “RFID tags” – and the list goes on.  Sensors are a normal part of our everyday existence.  We have sensors in cars, washing machines, phones, planes, elevators, machinery and more.  Sensors provide a pretty basic service – they “sense”.  What they sense can vary – movement, temperature, magnetic fields, pollution – and again the list goes on.

Over the years sensors have become more sophisticated and have had a significant impact on how we work and live.  In the 1990s a movement began called the “Internet of Things”.  Started by the Auto-ID Center (originally based at MIT), the idea was to create a network of objects that could talk to each other and to the internet.  This concept and its offshoots have continued to gain speed.  Originally there was a lot of excitement around RFID tags that could transmit object information and be tracked through the supply chain, into supermarkets and even into your home.   So it is clear that we are now living in a world where objects are talking to each other and to us.

However, over these past years the proliferation of sensor technology has had its ups and downs – technology challenges, adoption issues, sometimes privacy concerns and, many times, a lack of focus on creating true economic value for stakeholders. Over the last 3-5 years, though, there has been a resurgence of sensor-based technology popping up in areas where its potential impact is massive.

One area that has gotten a lot of attention recently is the “Smart Grid”.  Essentially, utility companies, with the help of innovative start-ups, are starting to deploy sensors at electricity and gas meters, along the grid and even down to the appliances in your kitchen.  The primary purpose of this investment is to generate data. This information can then be used by consumers, utilities and companies to manage a “smarter grid”.  There are some exciting companies in this space – here are just a few you can look at for more information: Silver Spring Networks, eMeter and Tendril.

So now let’s talk about cities – and specifically “smart cities”.  The first question is: how does a city become a “smart city”?  As usual, when in doubt go to Wikipedia (I do it so that we don’t have to waste time on definitions!):

“Smart cities can be identified (and ranked) along six main axes or dimensions. These axes are: a smart economy; smart mobility; a smart environment; smart people; smart living; and, finally, smart governance. These six axes connect with traditional regional and neoclassical theories of urban growth and development. In particular, the axes are based – respectively – on theories of regional competitiveness, transport and ICT economics, natural resources, human and social capital, quality of life, and participation of citizens in the governance of cities.

A city can be defined as ‘smart’ when investments in human and social capital and traditional (transport) and modern (ICT) communication infrastructure fuel sustainable economic development and a high quality of life, with a wise management of natural resources, through participatory governance.”

That is a long and somewhat complicated definition – but it does capture the essence of a “smart city”.   So here is the rub: for any of the six dimensions we need one vital component – data.  How do you know if you have smart mobility or a smart environment if you cannot measure it and gather data?

Many smart city activities depend on two vital components – new sources of data that inform us, and software that collects this data and allows us to make smarter, quicker and more informed decisions.

There is some great information and thought leadership from IBM on this topic. I have found IBM to have the most comprehensive vision and plan around their Smart City initiatives – they leverage sensors (both human and electronic), data and software to bring amazing new solutions to bear on the parts of the world that are growing the fastest and will pose the biggest challenge of our time – our cities.

So in order for a city to be smarter, it has to talk to us – it has to provide us new types of data so that we can better manage it. Let me use Streetline as an example of a new technology that is helping cities talk to us (yes, I know I am promoting my own company – but I know its technology and can talk about it in context).

Not smart parking!

Streetline is the leader in deploying ultra-low-power mesh sensor networks.  The idea is that these mesh networks allow a city to provide amazing new sources of data and information.  Our first focus area has been smart parking.  Over 30% of the traffic in a city is caused by people looking for parking – I am sure you have personally experienced this.  Streetline has developed a parking sensor that gets installed at every parking spot. Together with meter sensors, we now have real-time access to both parking payment information and vehicle presence information.  This is just the first step – in the future we hope to deploy sensors to monitor traffic and water pressure in fire hydrants, and to track in real time whether street lamps are working (when you have 50,000+ street lamps, the savings potential is significant). All of these sensors provide real-time data that can change the way a city operates.  This video by Good illustrates the concept much better than I can.
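To give a feel for the kind of software that sits on top of such a network, here is a simplified Python sketch that folds raw spot-level occupancy readings into per-block availability.  This is an illustration only – the block names, event format and logic are hypothetical, not Streetline’s actual data model:

```python
from collections import defaultdict

# Each event: (block_id, spot_id, occupied), as a sensor might report it.
events = [
    ("University Ave 100", "spot-01", True),
    ("University Ave 100", "spot-02", False),
    ("University Ave 100", "spot-03", True),
    ("Hamilton Ave 200", "spot-01", False),
]

def availability_by_block(events):
    """Keep the latest reading per spot, then count free and
    occupied spaces for each block."""
    latest = {}
    for block, spot, occupied in events:
        latest[(block, spot)] = occupied  # later readings overwrite earlier ones
    counts = defaultdict(lambda: {"free": 0, "occupied": 0})
    for (block, _), occupied in latest.items():
        counts[block]["occupied" if occupied else "free"] += 1
    return dict(counts)

print(availability_by_block(events))
# {'University Ave 100': {'free': 1, 'occupied': 2}, 'Hamilton Ave 200': {'free': 1, 'occupied': 0}}
```

A guidance app or a city dashboard would query exactly this kind of aggregate in real time to steer drivers toward open spaces.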

I will be writing more about this topic in the weeks ahead. We are entering a new phase of the “internet of things”, where the technology is getting cheaper and better and the software (both web and mobile) is getting more sophisticated and easier to use.  I predict we will see a revolution in smart city technology over the next 5-10 years.

So yes, I do think cities can talk, and they can give us amazing new types of information that will change how we work and live.

As always your thoughtful comments and input are welcome.
Zia

Starbucks vs. Peet’s: Why the iPhone matters more than the coffee.

The coffee at Peet’s tastes better than Starbucks’ (at least to me). Peet’s offers free wifi, while at Starbucks you have to pay for wifi through AT&T.  The price of a medium Café Latte at Peet’s and at Starbucks (I refuse to call a medium a “Grande”) in Palo Alto is the same at $3.35. So at Peet’s I get better-tasting coffee and free wifi at the same price – yet I go to Starbucks more often.

Why, you ask? It is because of technology and the iPhone!

For those of you who are iPhone users and coffee drinkers, you will no doubt have downloaded the Starbucks locator application. An easy-to-use app with fully enabled location-based services, the myStarbucks application finds the closest store, shows you the store hours, gets you directions and even lets you invite a friend directly from the application. Now, even though I prefer Peet’s coffee to Starbucks, when I am in an unfamiliar area there is no easy way for me to locate a Peet’s. So I simply press a couple of buttons on my iPhone and it instantly tells me the closest Starbucks and gives me directions on how to get there.
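Under the hood, a store locator like this is conceptually simple: take the phone’s GPS coordinates and find the store with the smallest great-circle distance. Here is a minimal Python sketch using the haversine formula – the store list and coordinates are made up for illustration, and a real app would of course fetch them from a server:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical store list for illustration.
stores = [
    ("Starbucks University Ave", 37.4443, -122.1612),
    ("Starbucks California Ave", 37.4282, -122.1433),
]

def closest_store(lat, lon):
    """Return the store entry nearest to the given coordinates."""
    return min(stores, key=lambda s: haversine_km(lat, lon, s[1], s[2]))

print(closest_store(37.4419, -122.1430))  # user's current location
```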

Now, to make it even easier, Starbucks is piloting a new mobile payment application that allows you to pay for your coffee using your iPhone.  Essentially, you pre-load money into the application and, after ordering your drinks, swipe the barcode from your iPhone app over the barcode reader at the store – and presto, you are done. Starbucks is piloting this application at selected stores in Silicon Valley and Seattle.
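Conceptually, the payment side is a stored-value account: the barcode simply identifies the account, and the register debits it. Here is a toy Python model of that logic – my own simplification for illustration, not Starbucks’ actual implementation:

```python
class StoredValueCard:
    """Toy model of a pre-loaded payment balance, redeemed by scanning
    a barcode at the register (account logic only, no hardware)."""

    def __init__(self, balance_cents: int = 0):
        self.balance_cents = balance_cents

    def load(self, cents: int) -> None:
        """Pre-load money into the account."""
        self.balance_cents += cents

    def charge(self, cents: int) -> bool:
        """Deduct the purchase if the balance covers it."""
        if cents > self.balance_cents:
            return False  # decline: customer must reload first
        self.balance_cents -= cents
        return True

card = StoredValueCard()
card.load(2000)              # pre-load $20.00
print(card.charge(335))      # $3.35 latte -> True
print(card.balance_cents)    # 1665 cents remaining
```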

I am sure that because of these simple and easy-to-use iPhone applications Starbucks is attracting more customers to its stores.  It only costs $20,000-$30,000 to build an iPhone application – a small amount compared to what I am sure is spent on traditional advertising.   You would think that Peet’s would figure this out and realize that it is losing customers to Starbucks simply because people find it harder to locate its stores.

Innovative, easy-to-use and relevant mobile applications are changing the way we work and play.   Companies that truly take the time to understand the needs of their customers and provide such powerful yet simple mobile solutions have much to gain.

Zia.

Open Government – are Data.Gov and Apps.Gov delivering on their promise?

There is certainly a long list of challenges facing the Obama administration – the economy, healthcare and two wars, just to name a few.  Regardless of your politics, I think there is one aspect of the Administration’s efforts that requires further discussion and exploration.  On his first day in office, President Obama signed the Memorandum on Transparency and Open Government.  The memorandum outlined a commitment to “creating an unprecedented level of openness in Government…” It promised to “ensure the public trust and establish a system of transparency, public participation, and collaboration.”  My intent in this post is not to have a broader discussion of the Administration’s openness, but rather to explore two very specific components of that pledge – the launch of Data.Gov and Apps.Gov.

As part of his focus on technology as a key driver of government effectiveness, openness and efficiency, President Obama appointed two impressive and accomplished executives to lead this effort: Vivek Kundra (Federal CIO) and Aneesh Chopra (Federal CTO).  I have had the privilege of meeting and talking to both Vivek and Aneesh and have been impressed with their plans to leverage technology, especially Web 2.0 and social media, to provide enhanced services to citizens.  Data.Gov and Apps.Gov are two important components of that effort.

Data.Gov was launched in 2009.  The stated objective of Data.Gov is to “increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government.”  Data.Gov provides three kinds of data catalogs (source: data.gov FAQ):

“Raw” Data Catalog: a catalog with instant view/download of platform-independent, machine-readable data (e.g., XML, CSV, KMZ/KML, or shapefile formats).

Tools Catalog: a catalog that provides the public with simple, application-driven access to Federal data via hyperlinks. This catalog features widgets, data mining and extraction tools, applications, and other services.

Geodata Catalog: a catalog that includes trusted and authoritative Federal geospatial data. This catalog includes links to download the datasets and a metadata page with details on the datasets, as well as links to more detailed Federal Geographic Data Committee (FGDC) metadata information.
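The “raw” catalog is the one I find most exciting, because machine-readable formats mean anyone can pull a dataset straight into their own tools.  As a rough illustration, here is a short Python sketch that downloads and parses a CSV dataset – note the URL is a placeholder, since each catalog entry on Data.Gov links to its own download location:

```python
import csv
import io
import urllib.request

# Placeholder URL for illustration -- real catalog entries on data.gov
# each link to their own download locations.
DATASET_URL = "https://www.data.gov/example/dataset.csv"

def load_dataset(url: str) -> list[dict]:
    """Fetch a 'raw' machine-readable CSV dataset and parse each row
    into a dict keyed by the header columns."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

rows = load_dataset(DATASET_URL)
if rows:
    print(f"{len(rows)} records, columns: {list(rows[0].keys())}")
```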

Currently the catalog includes 1,078 raw data sets, 484 tools and 167,394 geodata records.   A review of the data currently available “by agency” provides some interesting insight.  The US EPA had 6,151 data downloads in the week prior to Feb 8, 2010.  The Department of the Interior and the US Treasury came in second and third with 4,352 and 4,079 downloads, respectively.  The US EPA also had the most raw data sets at 426, while the lowest number of data sets came from the US Consumer Product Safety Commission at zero (yes, that is zero – somehow this made me a little nervous!).

The US Government and its many agencies produce massive amounts of data each year.  By providing academics, researchers and companies access to this data, we may enable an individual researcher to find a cure for cancer or a college department to discover a weather pattern that can prevent natural disasters.  This is the power of open access to data – for the people, by the people!

Apps.Gov is a very interesting and potentially powerful initiative.   Essentially, this is a private cloud for the US Government.   Managed by the General Services Administration (GSA), Apps.Gov includes Business Apps, Productivity Apps, Social Media Apps and Cloud IT Services. The platform/exchange is similar to other successful private-sector application exchanges such as the SAP EcoHub, the Salesforce AppExchange and of course the Apple iPhone App Store.

Apps.Gov provides government agencies a single marketplace to buy and use a broad range of applications.  In the Business Apps section, for example, HP has 526 solutions listed, Microsoft has 65, VMware has 716 and Salesforce has 67.  Several other companies have multiple solutions available.  Apps.Gov could have a profound impact on how the US Government buys and consumes software.

So here is the Right Question:  Have Data.Gov and Apps.Gov delivered on their promise of fostering an open, efficient and effective government?   Are they on the right track, and what would you do differently?

I would welcome your views and opinions, and especially your stories if you have used data from these sites or have any other experience related to this effort.

Thanks.

Zia.

Photo Credit: Ian-s
