Wednesday, August 31, 2011

Measuring knowledge management (Aug. 30)

There is an old adage that says that what gets measured gets managed (or, conversely, that what does not get measured does not get managed). It seems then that in order to manage knowledge, one must be able to measure it. Furthermore, following Bose (2004), to show the value of a knowledge management system, it is imperative that this value be demonstrated through metrics, if not straightforwardly through monetary value then, at least, through formally established anecdotal evidence. In fact, this is why measuring knowledge (management) is not the same as measuring typical ROI: it refers to intangible assets that supplement financial measures. Bose collects several lists of metrics that can be employed to build intellectual capital indicators, for example: number of patents, customer and employee satisfaction, IT investment and literacy, training expense per employee, employee turnover, leadership, motivation, etc. While some of these are relatively easy to determine (it is just a matter of counting patents, for instance), others are qualitative or subjective in nature and must be treated carefully, especially when they are turned into numeric values. For example, if an employee is asked to rate his or her satisfaction on a scale from 1 to 10, the answer might be 9 on a good day and 5 on a bad day... Moreover, it is not just a question of gathering metrics from a list; the key is to build indicators that make sense according to the (knowledge management) strategy. Bose proposes a top-down way to go about building intellectual capital indicators: (1) defining the business concept or strategy; (2) identifying critical success factors; (3) selecting corresponding performance indicators; (4) assigning weights (priority, importance) to those indicators; (5) consolidating metrics (in a hierarchy); (6) generating a single intellectual capital index; and (7) using the index to guide management. This may be too linear or rigid, but it still points at two key aspects that must be accounted for. One, measuring is the result of a strategic purpose, and the resulting measures should be used to guide (correct, improve, learn) the organization towards set goals. Two, the act of defining metrics, indicators, indexes and critical success factors is a way to materialize and clarify the strategy, and as such it is full of (inter-subjective) value judgments and should be understood as a means, not an end in itself.
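To make steps (4) through (6) concrete, here is a minimal Python sketch of the weighting-and-consolidation idea. This is my own illustration, not Bose's method: the metric names, ranges and weights are invented for the example.

    # Hypothetical metrics: raw value, the (min, max) range used to normalize, and a weight.
    metrics = {
        "patents_filed": {"value": 12, "range": (0, 50), "weight": 0.2},
        "employee_satisfaction": {"value": 7.5, "range": (1, 10), "weight": 0.3},
        "training_expense_per_employee": {"value": 1200, "range": (0, 5000), "weight": 0.2},
        "it_literacy_score": {"value": 0.8, "range": (0, 1), "weight": 0.3},
    }

    def normalize(value, lo, hi):
        """Map a raw metric onto a 0..1 scale so heterogeneous units can be combined."""
        return (value - lo) / (hi - lo)

    def intellectual_capital_index(metrics):
        """Weighted sum of normalized metrics, consolidated into a single 0..1 index (step 6)."""
        total_weight = sum(m["weight"] for m in metrics.values())
        return sum(
            m["weight"] * normalize(m["value"], *m["range"])
            for m in metrics.values()
        ) / total_weight

    print(f"IC index: {intellectual_capital_index(metrics):.2f}")

Note that the caveat from the paragraph above applies directly: the ranges and weights encode (inter-subjective) value judgments, so the resulting index is only as meaningful as the strategy behind it.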

In terms of specific methods or frameworks for measuring knowledge management, Bose mentions the Balanced Scorecard (BSC), the Skandia Navigator and Economic Value Added (EVA). Let's focus on the BSC. Kaplan and Norton introduced the BSC in the early 90s as a way for companies to focus on (measuring) their intangible assets. After some use, in the late 90s, the BSC began being used as a strategic management system, which, as mentioned above, would help translate the vision and strategy into specific goals and metrics at the department or individual level (Kaplan & Norton, 1996). The focus on intangible assets meant categorizing metrics into four perspectives: financial (traditional, tangible), internal business processes, customers, and learning and growth. This last one in particular is evidently tied to intellectual capital and meant that in the 2000s the BSC started being used for measuring knowledge management as well. For example, in Fairchild (2002) a proposal is made for leveraging KM through the BSC by mapping the BSC perspectives to intellectual capital (IC) perspectives as follows: the financial perspective would correspond to IC as such, for instance through the Skandia Navigator; the customer perspective would correspond to social capital (as we know, social capital has much to do with how customers perceive the company: reputation, trust, etc.); the internal perspective would correspond to structural aspects (in this case related to KM processes, skills and technology); and the learning and growth perspective would be mapped to human capital (e.g. training and satisfaction).
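As a quick illustration of Fairchild's mapping, the correspondence can be written down as a simple lookup structure. The dictionary below is my own rendering, and the example metrics under each perspective are invented:

    # Fairchild (2002): BSC perspective -> intellectual capital perspective,
    # with hypothetical example metrics per perspective.
    bsc_to_ic = {
        "financial": {"ic_perspective": "intellectual capital as such (e.g. Skandia Navigator)",
                      "example_metrics": ["IC index", "intangible asset value"]},
        "customer": {"ic_perspective": "social capital",
                     "example_metrics": ["reputation score", "customer trust survey"]},
        "internal_processes": {"ic_perspective": "structural capital",
                               "example_metrics": ["KM process maturity", "technology adoption rate"]},
        "learning_and_growth": {"ic_perspective": "human capital",
                                "example_metrics": ["training hours per employee", "employee satisfaction"]},
    }

    for bsc, ic in bsc_to_ic.items():
        print(f"{bsc:>20} -> {ic['ic_perspective']}")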

However, Voelpel et al. (2006) raise a warning with respect to using the BSC from the point of view of innovation. They argue that the BSC is too rigid (having a predefined set of perspectives may not work for all types of organizations or organizational designs). It may also be too static, because it places too much emphasis on uniform and hierarchical objectives (whereas innovation should be much more flexible). It is also mostly internal: despite having a customer perspective, it still reflects only on the organization and not on its competition or partners, which we have already seen as critical in an innovation mindset. It has a rather formal understanding of learning, which may be equated to the STI mode of learning discussed earlier in this course, which implies neglecting the DUI mode (though Voelpel et al. do not frame it this way). And, finally, it is too mechanistic (even bureaucratic). This last issue, however, could be said of any formal use of metrics or performance indicators and is perhaps the most difficult challenge. How do we enable rigorous and traceable management without going against the flexibility required of knowledge management aimed at dynamic capabilities and innovation? How do we use the measurements as an evolutionary improvement strategy and not as an end in itself, where the quantitative results are more important than the creation of new knowledge, products, services, etc.?

Tuesday, August 23, 2011

Intra and Inter-organizational Networks (Aug. 23)

Throughout the course we have emphasized the networked character of knowledge management, under the premise that knowledge sharing both within and between firms holds the key to tapping resources in order to produce more significant and sustained innovations. On the one hand, intra-organizational networks (between members of the same organization) stimulate knowledge sharing as well as enabling more localized decision-making. Galbraith et al. (2002) call these the lateral capabilities of an organization, to highlight the fact that they overcome the rigidity of hierarchies. Since network structures emerge naturally (from the bottom up), it is difficult to provide specific institutional designs, let alone guarantee that they will actually emerge in an effective manner. Nonetheless, Galbraith et al. do provide certain mechanisms or tools that should catalyze or nurture such emergence. Given that Galbraith is a pioneer of the "information-processing view of organizations" (owing much to Herbert Simon), it is only natural that they would suggest information technology as one such mechanism (and here one must think of knowledge management technologies in particular). But we should keep in mind that IT may create information-processing capabilities as well as generating additional information-processing needs (by, for example, providing local decision-makers with more information than what they are used to handling). Additional mechanisms include: communities of practice, annual retreats, and personnel co-location, among others.

Moving beyond the organization, we then go into inter-organizational networks (connected to the literature on business networks, virtual organizations, clusters, districts, etc.). This new level steps aside from the traditional "resource-based view of organizations", where the focus is on a firm's own resources and capabilities, to a focus on network resources. Through Toyota's experience when entering the US in the 90s, Dyer and Hatch (2006) illustrate the competitive advantage that one organization can create out of identifying and supporting relation-specific capabilities, in this case between Toyota and its US suppliers. Since Toyota's strategy and IT are aligned in order to enhance the capabilities of their suppliers, one could rightly expect that such capabilities might also be exploited by Toyota's competition (given that they share the same suppliers). However, such capabilities are tied to a knowledge-sharing strategy which is difficult to replicate and from which, in effect, neither GM nor Ford, for example, were able to benefit. The specificity of the relation between Toyota and its suppliers, coupled with replication barriers (the existence of rigid processes or a lack of absorptive capacity), actually implies that Toyota was able to gain sustained advantage from their knowledge transfer without the risk of it being copied. Since resources are tied to the capability to exploit them, it was not possible for other manufacturers to use the knowledge that Toyota shared with its suppliers, since they would also have had to redesign their associated processes, systems, standards, or even trivial but rigid elements such as the size of boxes. This supports the notion that knowledge sharing is a key source of competitive advantage, rather than the belief that knowledge protection is more strategic. Nonetheless, this needs to be a continued effort, because despite the lengthy or costly learning curves and transformation processes, it will still be the case that best practices will eventually be copied and the firm must always be developing new ones.

Tuesday, August 16, 2011

Organizational Learning and Dynamic Capabilities (Aug. 16)

Dynamic capabilities, according to Sher and Lee (2004), refer to the organization's way of responding in a rapidly changing environment, where a capability is understood as the adoption, integration and reconfiguration of skills, resources and functions to meet change. In order for dynamic capabilities to develop, Sher and Lee argue that an organization must take into consideration: (1) path dependence (future decisions are influenced by past decisions); (2) double-loop learning (where single-loop learning refers to doing things better, double-loop learning is about doing better things); and (3) meta-routines (routines to learn routines). In sum, it is about knowledge management of both exogenous knowledge (clients, suppliers, competition) and endogenous knowledge. Information technology then acts as a mediating variable between knowledge management and dynamic capabilities, enabling the transition from knowledge and learning to being able to react adequately. However, in their study, Sher and Lee found that only ERPs and data warehouses were indeed influential factors in this relationship, while e-mail, document management and online search capabilities were not. This suggests that IT does not have a deterministic effect on this relationship and will in fact depend on the way in which specific organizations employ it to leverage knowledge management.

A different understanding of dynamic capabilities allows for a broader definition, where they refer to a learned and stable pattern of collective activity through which the organization systematically generates and modifies its routines in pursuit of improved effectiveness (Zollo and Winter, 2002). In this view, the environment need not be rapidly changing for the organization to adapt to it; the key is rather in the formal and persistent way in which the organization goes about this adaptation (an ad hoc creative adaptation is not a dynamic capability). Dynamic capabilities are related to learning mechanisms, which can be (1) through experience and routines; (2) through articulating (implicit) knowledge; or (3) through codifying (explicit) knowledge. As a process, Zollo and Winter propose an evolutionary approach where dynamic capabilities emerge from a cycle going from variation (of explicit knowledge), to selection (explicit), to replication (of tacit knowledge), and finally to retention (tacit), and then back to variation. Along the way, it is a key factor that individuals and the organization as a whole be able to explicitly identify the connection between decisions/actions and performance.
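As a toy illustration of that cyclical structure (my own sketch, not code from Zollo and Winter), the knowledge evolution cycle can be expressed as a tiny state machine in which each stage knows its successor and retention feeds back into variation:

    from enum import Enum

    class Stage(Enum):
        VARIATION = "variation (explicit knowledge)"
        SELECTION = "selection (explicit)"
        REPLICATION = "replication (tacit knowledge)"
        RETENTION = "retention (tacit)"

    # The cycle closes on itself: retention leads to the next round of variation.
    NEXT = {
        Stage.VARIATION: Stage.SELECTION,
        Stage.SELECTION: Stage.REPLICATION,
        Stage.REPLICATION: Stage.RETENTION,
        Stage.RETENTION: Stage.VARIATION,
    }

    stage = Stage.VARIATION
    for _ in range(5):  # walk a little over one full cycle
        print(stage.value)
        stage = NEXT[stage]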

It then becomes clear that the relationship between knowledge management and dynamic capabilities is learning-oriented. In fact, a combination of organizational learning and knowledge sharing is what enables improved firm effectiveness (Yang, 2007). In particular, it is an effort aimed at preventing knowledge depreciation (due to employee turnover, obsolescence, incomplete knowledge transfer or difficult access). However, until recently most literature on organizational learning has focused on procedural (know-how) and declarative knowledge (know-what) while neglecting relational knowledge (know-who), as claimed by Borgatti and Cross (2003). This is obviously changing in the context of a networked, knowledge-based society where the Internet and social networking have become familiar interaction spaces. In this new environment, social ties may be weak (in which case they may help in finding a job, advancing professionally or distributing ideas) or strong (in which case they may foster knowledge transfer, especially when such knowledge is tacit and complex). It has already been recognized for some time that such ties are more likely to emerge when there is homophily (similarity in terms of race, gender, age, education) or physical proximity. However, Borgatti and Cross also find that it all starts with the decision to seek information in the network, and this depends on: knowing (who knows what), value (how much do I value the other's knowledge), access (how easy is it to access the other's knowledge) and cost (how costly is it to access the other's knowledge). Empirically, however, cost seems less important, perhaps because it is overshadowed by the urgency of the information need.
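To make those four factors tangible, here is a small Python sketch that ranks colleagues as potential information sources. The scoring function and its weights are my own invention for illustration, not Borgatti and Cross's model:

    # Hypothetical colleagues, each rated 0..1 on the Borgatti & Cross factors.
    colleagues = {
        "Ana": {"knowing": 0.9, "value": 0.8, "access": 0.4, "cost": 0.7},
        "Luis": {"knowing": 0.6, "value": 0.7, "access": 0.9, "cost": 0.2},
        "Marta": {"knowing": 0.8, "value": 0.9, "access": 0.6, "cost": 0.5},
    }

    def seek_score(factors, cost_weight=0.1):
        """Average of knowing, value and access, minus a small cost penalty.
        The low default cost_weight mirrors the empirical finding that cost
        tends to be overshadowed by the urgency of the information need."""
        positive = factors["knowing"] + factors["value"] + factors["access"]
        return positive / 3 - cost_weight * factors["cost"]

    ranking = sorted(colleagues, key=lambda name: seek_score(colleagues[name]), reverse=True)
    print("Ask first:", ranking)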

In summary: in order for an organization to develop dynamic capabilities that will enable it to stay in the game and improve its effectiveness, an adequate organizational learning strategy must be in place, with an emphasis on relational knowledge. Furthermore, this connection is enabled both strategically and technologically by knowledge management.

Tuesday, August 2, 2011

Business Intelligence and KM (Aug. 2)

There is a clear and growing relationship between business intelligence (BI) and knowledge management (KM). In Herschel and Yermish (2009) the view is that BI focuses on identifying trends or patterns in (typically large) explicit data warehouses. The aim is to aid in decision-making or in planning corrective actions when a trend deviates from organizational goals. BI usually employs explicit quantitative data, which may imply an (excessive) emphasis on the technological tools and methods employed to analyze data, such as data mining and online analytical processing (OLAP). Accordingly, Herschel and Yermish argue that BI by itself is insufficient for generating value. Since we have already discussed the value-creation emphasis of KM, the idea is that BI be viewed as a subset of KM in order to make the technological capabilities useful and value-adding. For instance, an organization might place its BI activities and support tools in the context of a KM strategy led by learning objectives, using Nonaka and Takeuchi's SECI model, for example. Also, since KM encompasses both explicit and tacit knowledge, a KM process should make knowledge explicit prior to employing BI techniques. One way in which this can be achieved is by employing so-called knowledge exchange protocols. These offer a template and a structure for sharing knowledge, thereby stimulating and facilitating the entry of new knowledge (from individuals into the system) as well as the consumption of existing knowledge (from system to individual). An example protocol is presented by Herschel and Yermish: the SOAP protocol (not to be confused with service-oriented jargon) includes Subjective, Objective, Assessment and Plan categories. This template is applicable to different domains and enables an explicit categorization of tacit knowledge via text.
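A minimal sketch of what a SOAP-structured knowledge entry might look like in code (the dataclass and its field contents are my own illustration of the template, not an implementation from Herschel and Yermish):

    from dataclasses import dataclass

    @dataclass
    class SoapEntry:
        """One knowledge item structured by the SOAP exchange protocol."""
        subjective: str  # impressions, opinions, context as experienced
        objective: str   # observable facts, data, measurements
        assessment: str  # the contributor's interpretation or diagnosis
        plan: str        # recommended next actions

    entry = SoapEntry(
        subjective="The support team feels ticket volume spikes after each release.",
        objective="Tickets rose 40% in the week following the last two releases.",
        assessment="Release notes are not reaching front-line staff in time.",
        plan="Publish internal release briefings two days before each rollout.",
    )
    print(entry.assessment)

Because every entry carries the same four slots, downstream BI tooling can index and mine the text fields consistently, which is exactly the making-knowledge-explicit step described above.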

A different take on the relationship between KM and BI is offered by Cody et al. (2002). Out of their work at IBM Research, the authors contend that the main difference between BI and KM is that the former is centered on (quantitative) explicit data analysis, while the latter is focused on tacit textual information. As such, the point is to move towards a single BIKM system in which it is possible to associate text with data. Often, the presence of meta-data will be enough to componentize text and link it to data, but in other cases there will be no meta-data or the link will not be explicit beforehand, requiring more preparation steps. In the end, a single model with shared dimensions will integrate both text and data in order to improve the quality of business decisions (enriching the analyses offered by using data alone).
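As a rough sketch of the shared-dimensions idea (the tables and the product dimension below are hypothetical; Cody et al. describe the architecture, not this code), text and data can be joined as soon as both carry the same dimension key:

    # Hypothetical BIKM join: quantitative facts and text documents
    # share a "product" dimension, so they can be analyzed together.
    sales_facts = [
        {"product": "X200", "quarter": "Q1", "units": 1500},
        {"product": "X200", "quarter": "Q2", "units": 900},
    ]
    documents = [
        {"product": "X200", "text": "Customers report overheating in the X200 after firmware 2.1."},
    ]

    def enrich(facts, docs):
        """Attach related documents to each fact row via the shared product dimension."""
        by_product = {}
        for d in docs:
            by_product.setdefault(d["product"], []).append(d["text"])
        return [{**f, "related_docs": by_product.get(f["product"], [])} for f in facts]

    for row in enrich(sales_facts, documents):
        print(row["quarter"], row["units"], "->", row["related_docs"])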