At a recent event for SafeNet, Amy Konary made an interesting comment based on analysis performed by IDC. On average, end users of enterprise software felt they utilized only 30% or so of the available functionality of that application. That is, they were getting no benefit from up to 70% of the software’s features and functionality. (Amy, please forgive me if I don’t have those percentages exactly right, but it was in that range.) The conclusion reached from this observation is that software should be better dissected so that people pay only for the pieces they want.

The conclusion seems logical enough, but I’ve struggled with it ever since. Firstly, if it’s true, why have rational economic buyers still purchased software en masse? Many financial analysts believe that software spending was one of the first sectors to bounce back from the recent economic turmoil. Secondly, are there any other normally functioning markets with similar traits? Lastly, why do end users on average use only 30% of the available capabilities?

Let’s start with the last question first. Competition drives continual improvement, and software is no different. Many software applications are relatively “new”, perhaps less than 5 to 10 years old, and even an “old” application is only about 20 years old. So it should come as no surprise that they’re continually and significantly evolving; think of Moore’s law. Perhaps when you purchased the software 5 years ago, 70% of it felt applicable. Now, with advancements, you’re in that 30% range. Secondly, applications tend to have major releases every one to two years. Companies, however, often upgrade far less frequently, perhaps every 5 to 7 years. The more critical the application, the greater the hesitancy to risk an upgrade. Perhaps, then, customers see large fluctuations in value depending on where they are in their upgrade cycle. That, though, doesn’t address the entire question. Are there other examples of 30 to 50% utilization? One jumps out at me: the auto market.

Cars continue to evolve rapidly based on technological advances, with performance and safety two of the largest areas. A $35,000 sedan today probably performs as well as, and is likely safer than, the top-of-the-line Mercedes-Benz of just 15 years ago. Most people are not capable of using, or never plan to use, the full extent of their car’s handling and acceleration capabilities. Yet people still buy cars, all the time and in large volumes, knowing they will never use more than 30% of the performance or safety capabilities.

Perhaps we’ve discovered, then, that 30% might be a fluctuating, “snapshot”-dependent amount. We’ve also discovered that there are other major industries where people willingly pay for features and performance they know they’ll never come close to utilizing. We still haven’t answered the “Why?”

Let’s look at the 30% functionality question. Based on the above, let’s assume it’s something in the 30 to 50% range. What does that really mean? The implied conclusion from the research is that if you spent $100,000 on the software, what you really should have spent is $30,000 to $50,000. Given that you’re only getting 30% of the value, shouldn’t you pay around 30 to 50% of the current cost? I agree, it seems logical.

I think where the logic breaks down, or, to put it another way, where the “irrational” behavior makes more sense, is in the ROI analysis. In my experience in sales, customers will not buy software unless they feel reasonably good about a return on the order of 300%. Given all the risks of buying and implementing software, they need a very large return, at a minimum, to make any purchase. That is, if your package is $100,000, that CFO had better believe you solve a $300,000 problem or they’re not buying. And that’s a minimum; typically they expect it to be more like $500,000 to $1 million.
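To make that threshold concrete, here is a back-of-envelope sketch in Python. The figures (a $100,000 price and the 3x, 5x, and 10x multiples) are the ones from the paragraph above, used as illustrative assumptions rather than outputs of any study.

```python
def minimum_problem_size(price: float, required_multiple: float = 3.0) -> float:
    """Smallest problem value a buyer must believe in before purchasing.

    required_multiple captures the hedge against implementation risk: the
    buyer demands a return several times the price before signing off.
    """
    return price * required_multiple

price = 100_000  # illustrative package price
print(f"A ${price:,} package must credibly solve at least a "
      f"${minimum_problem_size(price):,.0f} problem.")
# Typical expectations run higher, more like 5x to 10x the price:
for multiple in (5.0, 10.0):
    print(f"At {multiple:.0f}x: ${minimum_problem_size(price, multiple):,.0f}")
```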

Therefore, if they’re trying to solve a $500,000 to $1 million problem, and they believe you can help them do that, what’s $50,000? It’s why the “low price” leaders don’t always win: “Are we going to risk the return we set out to achieve, $600,000, for a $50,000 ‘savings’?” Incidentally, I think this last point was completely misunderstood by those who thought the Open Source movement would completely change the software world, but that is another article entirely.
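One way to see why that $50,000 “savings” so rarely wins is to weigh it against even a small added risk to the expected return. The expected-value framing below is my own illustration, with every number assumed:

```python
# Hypothetical expected-value check: is the cheaper vendor worth the risk?
expected_return = 600_000   # the return the buyer set out to achieve
savings = 50_000            # the "low price" leader's discount

# Break-even: the added probability of missing the return at which the
# expected loss exactly cancels out the savings.
break_even_risk = savings / expected_return
print(f"Break-even added failure risk: {break_even_risk:.1%}")
```

The answer is about 8.3%: if the buyer believes the cheaper option adds even a one-in-twelve chance of missing the $600,000 return, the $50,000 savings is a bad trade.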

Now, not all software is created equal or designed to perform similar tasks. Many software functions once considered “exotic” have become commodity plumbing; think Amazon Web Services. My final thought: make sure you’re looking at the entire picture before creating the “buffet” approach to pricing. If your software price is only 30 to 40% of the entire project cost, and the buyer expects a minimum 300%-type return before taking on any project risk, then offering your product to the customer in “parcels” may turn you from a Mercedes into a Toyota without impacting the overall economic discussion that much.
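As a closing illustration of that whole-picture arithmetic, here is a sketch of the parceling math. The cost split (software at roughly a third of the project, with services and internal effort making up the rest) is an assumed example, not data:

```python
# Hypothetical project economics: how much does parceling the software move?
software_price = 100_000
other_project_costs = 200_000        # services, integration, internal time
total_cost = software_price + other_project_costs

# Suppose the buyer takes only 40% of the features at 40% of the price.
parceled_total = 0.4 * software_price + other_project_costs

print(f"Full project cost:     ${total_cost:,}")
print(f"Parceled project cost: ${parceled_total:,.0f}")
print(f"Overall savings: {1 - parceled_total / total_cost:.0%} of the project")
# A 60% software discount trims the whole project by only 20%. The buyer's
# go/no-go decision still hinges on the same 300%-type return, while the
# seller has repositioned itself down-market.
```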