There is an old adage that what gets measured gets managed (or, conversely, that what does not get measured does not get managed). It seems, then, that in order to manage knowledge, one must be able to measure it. Furthermore, following Bose (2004), to show the value of a knowledge management system, it is imperative that this value be demonstrated through metrics: if not straightforwardly through monetary value, then at least through formally established anecdotal evidence. This is why measuring knowledge (management) is not the same as measuring a typical ROI; it refers to intangible assets that supplement financial measures. Bose collects several lists of metrics that can be employed to build intellectual capital indicators, for example: number of patents, customer and employee satisfaction, IT investment and literacy, training expense per employee, employee turnover, leadership, motivation, etc. While some of these are relatively easy to determine (counting patents, for instance), others are qualitative and subjective in nature and must be treated carefully, especially when they are turned into numeric values. For example, an employee asked to rate his or her satisfaction on a scale from 1 to 10 might say 9 on a good day and 5 on a bad day... Moreover, it is not just a question of gathering metrics from a list; the key is to build indicators that make sense according to the (knowledge management) strategy. Bose proposes a top-down process for building intellectual capital indicators: (1) defining the business concept or strategy; (2) identifying critical success factors; (3) selecting corresponding performance indicators; (4) assigning weights (priority, importance) to those indicators; (5) consolidating the metrics (in a hierarchy); (6) generating a single intellectual capital index; and (7) using the index to guide management.
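Steps (3) through (6) of this process can be sketched as a small computation: a set of performance indicators, each normalized to a common scale and given a weight, is consolidated into one intellectual capital index. The indicator names, scores, and weights below are purely illustrative assumptions, not taken from Bose; the point is only to show how subjective ratings and counts end up blended into a single number (which is precisely why the value judgments behind the weights matter).

```python
# Illustrative sketch of consolidating weighted indicators into a single
# intellectual capital index (steps 3-6 of Bose's top-down process).
# All names, scores, and weights are hypothetical.

# Each indicator: (score normalized to [0, 1], weight reflecting priority)
indicators = {
    "patents_filed":         (0.80, 0.15),
    "customer_satisfaction": (0.70, 0.25),
    "employee_satisfaction": (0.50, 0.20),  # a 5/10 survey answer, normalized
    "training_per_employee": (0.60, 0.15),
    "employee_turnover":     (0.90, 0.25),  # inverted: lower turnover scores higher
}

def ic_index(indicators):
    """Weighted average of normalized indicator scores (step 6)."""
    total_weight = sum(weight for _, weight in indicators.values())
    return sum(score * weight for score, weight in indicators.values()) / total_weight

print(round(ic_index(indicators), 3))  # prints 0.71
```

Note how sensitive the index is to the weights: shifting weight from employee satisfaction to patents changes the result without any change in the underlying organization, which is one concrete reason the index should guide management rather than become a target in itself.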
Regardless of whether this may be too linear or rigid, it still points at two key aspects that must be accounted for. One, measuring is the result of a strategic purpose, and the resulting measures should be used to guide (correct, improve, teach) the organization towards set goals. Two, the act of defining metrics, indicators, indexes and critical success factors is a way to materialize and clarify the strategy; as such, it is full of (inter-subjective) value judgments and should be understood as a means, not an end in itself.
In terms of specific methods or frameworks for measuring knowledge management, Bose mentions the Balanced Scorecard (BSC), the Skandia Navigator and Economic Value Added (EVA). Let's focus on the BSC. Kaplan and Norton introduced the BSC in the early 1990s as a way for companies to focus on (measuring) their intangible assets. After some use, in the late 1990s, the BSC began being used as a strategic management system which, as mentioned above, helps translate the vision and strategy into specific goals and metrics at the department or individual level (Kaplan & Norton, 1996). The focus on intangible assets meant categorizing metrics into four perspectives: financial (traditional, tangible), internal business processes, customers, and learning and growth. This last one in particular is evidently tied to intellectual capital, which meant that in the 2000s the BSC started being used for measuring knowledge management as well. For example, Fairchild (2002) proposes leveraging KM through the BSC by mapping the BSC perspectives to intellectual capital (IC) perspectives as follows: the financial perspective would correspond to IC as such, for instance through the Skandia Navigator; the customer perspective would correspond to social capital (as we know, social capital has much to do with how customers perceive the company - reputation, trust, etc.); the internal perspective would correspond to structural aspects (in this case related to KM processes, skills and technology); and the learning and growth perspective would be mapped to human capital (e.g. training and satisfaction).
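Fairchild's mapping can be made concrete as a small data structure that pairs each BSC perspective with its IC counterpart and some candidate metrics. The mapping itself follows the text above, but the specific example metrics under each perspective are assumptions added here for illustration.

```python
# A tiny illustrative scorecard pairing each BSC perspective with the IC
# perspective Fairchild (2002) maps it to. Metric names are hypothetical.
scorecard = {
    "financial":           {"ic": "intellectual capital (e.g. Skandia Navigator)",
                            "metrics": ["economic value added", "IT investment"]},
    "customer":            {"ic": "social capital",
                            "metrics": ["customer satisfaction", "reputation"]},
    "internal processes":  {"ic": "structural capital",
                            "metrics": ["KM process maturity", "IT literacy"]},
    "learning and growth": {"ic": "human capital",
                            "metrics": ["training expense per employee",
                                        "employee satisfaction"]},
}

for perspective, entry in scorecard.items():
    print(f"{perspective} -> {entry['ic']}: {', '.join(entry['metrics'])}")
```

Laying the mapping out this way also exposes one of the criticisms discussed next: the four perspectives are fixed in advance, whatever the organization looks like.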
However, Voelpel et al. (2006) raise a warning with respect to using the BSC from the point of view of innovation. They argue that the BSC is too rigid (having a predefined set of perspectives may not work for all types of organizations or organizational designs). It may also be too static, because it places too much emphasis on uniform and hierarchical objectives (whereas innovation should be much more flexible). It is also mostly internal: despite having a customer perspective, it still reflects only on the organization and not on its competition or partners, which we have already seen as critical in an innovation mindset. It has a rather formal understanding of learning, which may be equated to the STI-mode of learning discussed earlier in this course, implying a neglect of the DUI-mode (though Voelpel et al. do not frame it this way). And, finally, it is too mechanistic (even bureaucratic). This last issue, however, could be said of any formal use of metrics or performance indicators and is perhaps the most difficult challenge. How do we enable rigorous and traceable management without going against the flexibility required of knowledge management aimed at dynamic capabilities and innovation? How do we use the measurements as an evolutionary improvement strategy and not as an end in itself, where the quantitative results become more important than the creation of new knowledge, products, services, etc.?