May The Force Not Be With You
More than half of the companies that deliver B2B software have some form of electronic license management embedded in their products. The lion’s share of this market belongs to homegrown technologies, an unsurprising fact considering that the one thing software companies do is create software. The pros and cons of build versus buy have been well documented and aren’t the topic of this article. What mystifies me is the glaring lack of metrics around this highly pervasive and extremely important issue.
Electronic licenses are meant to control product usage, recover revenue lost to misuse, provide greater insight into customer buying patterns, and expand addressable markets. And yet the practice of measuring these objectives, and seeing how electronic licensing is affecting them, is practically nonexistent.

Planning and coordination around electronic licensing is certainly not without its challenges. It requires a level of interoperability among functional groups that typically don’t interact at most organizations. Engineering, Support, IT, Sales, Marketing, Product Management, and Operations rarely sit at the same table to discuss one common topic, and yet this is exactly what an electronic software licensing project requires. Every single one of those groups not only has a role to play, but is also regularly impacted by a licensing deployment. The most successful organizations create a license governance body with cross-functional representation: decisions are coordinated, voted on, and given appropriate oversight during integration and deployment. Because this technology requires engineering and product management to be completely coordinated with IT, there is a level of complexity that isn’t typical of other internally deployed infrastructure. But even organizations that reach the nirvana of a well-orchestrated license management deployment rarely go back to measure its efficacy.
So what is the gauge for success or lack thereof? Typically it’s the customer response. In my 17 years of involvement in this market, the most common metric I’ve seen used is the number of support calls. If support calls are high, the technology is frowned upon. Internal debates ensue about whether electronic licenses are “really worth it”. Since usually nothing else is being properly measured, a high number of support calls typically results in one of two outcomes: seeking out a new technology to replace what’s obviously a broken system, or removing electronic licensing altogether under the postulation that customer experience trumps all.

What’s missing in all of this is data around what defines success. Most organizations have a set of desired outcomes when they decide to implement some level of electronic licensing. These can range from software piracy prevention to improved business intelligence, and everything in between. But rarely are metrics established at the outset around how these outcomes are being measured today and what sort of data would deem the project a success. I recall speaking to a large software vendor about their electronic licensing solution; one of their objectives was to prevent blatant misuse of their student-edition software. This was a large segment for them, and students are notorious for “sharing” software in violation of license agreements. To complement their electronic licensing, I suggested a tool that measures how much of their software is being shared on file sharing sites, so they could gauge how effective their electronic licensing technology was. The response I received was surprising: “Now that we have electronic license enforcement this shouldn’t be an issue, so we don’t need to use a tool to monitor file sharing.” This eyes-wide-shut approach to electronic licensing is the norm in the software industry. There seemed to be no interest in establishing and tracking piracy rates to determine whether their efforts were succeeding.
Now I’m not suggesting adding cumbersome processes and increasing complexity, but establishing some rough metrics will give you a way to start measuring ROI in some fashion. If the goal of a licensing project is to monetize features that are being given away, then determine how you plan to measure the value of those features once you integrate electronic licensing. If the objective is to prevent piracy, then establish, at some level, what your current piracy rates are, and measure against them regularly to see the changes. Every organization has some core set of objectives, and it’s important to establish baselines for them so that you can revisit them once you have deployed an electronic licensing system. Customer support calls are often the result of a poor implementation and don’t necessarily mean that you aren’t seeing a good return on your investment. It’s not uncommon for revenues to be up in parallel with support issues; you can fix the latter while keeping the benefit of the former.
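To make the idea concrete, here is a minimal sketch of what “establish a baseline, then measure against it” could look like in practice. The metric names and figures are hypothetical illustrations, not numbers from any real deployment:

```python
# Minimal sketch: compare pre- and post-deployment licensing metrics.
# All metric names and figures below are hypothetical examples.

baseline = {
    "estimated_piracy_rate": 0.22,   # share of installs with no valid license
    "feature_x_revenue": 0.0,        # a feature currently given away
    "license_support_calls": 120,    # license-related calls per quarter
}

current = {
    "estimated_piracy_rate": 0.14,
    "feature_x_revenue": 250_000.0,
    "license_support_calls": 310,
}

def report(baseline: dict, current: dict) -> None:
    """Print the change in each metric against its pre-deployment baseline."""
    for metric, before in baseline.items():
        after = current[metric]
        print(f"{metric}: {before} -> {after} (change: {after - before:+})")

report(baseline, current)

# A spike in support calls alongside a drop in piracy and new feature revenue
# suggests fixing the implementation, not abandoning the licensing project.
```

Even a crude tally like this, revisited quarterly, gives the governance body something better than support-call volume alone to judge the project by.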
I’m reminded of the scene from Star Wars where an eager Luke Skywalker heads off on his first mission to destroy the dreaded Death Star. Full of ambition and zeal, he’s determined to succeed, but the data around him is overwhelming his innate instincts. The sagely voice of Obi-Wan echoes in his head and tells him to turn all systems off and rely on the Force. We all know the result: mission successful, Death Star destroyed; chalk one up to the Force. Well, unless your CIO is Luke Skywalker, I recommend not relying on instincts, gut feeling, or anything that remotely resembles the Force. We mortals need data, and establishing metrics to measure license management efficacy will improve your ability to determine whether your investment is successful, and if not, how to make it so.