In recent blogs, I have been distinguishing between quantitative data and narrative data. I believe that I separated the two forms relatively well. Although I originally focused on the differences in order to give narrative "its own space," there can actually be a symbiotic relationship between the two types of data. In my last blog, I said that quantitative data can be incorporated into narrative data. In my submission today, I will be discussing how narrative can be used to develop metrics. There are various reasons one might wish to do so. If the narrative is repetitive - and there is some desire for its control or regulation - the metrics can be used to monitor future cases and determine the extent to which intervention has been successful. If there is a need to determine how to direct assets, the allocation can be premised on such metrics. Basically, any situation where metrics are currently used can benefit. The conceptual purpose of using narrative is to build around the circumstances of phenomena. Without this type of sensitivity, I would say that in most cases there would be institutional responses to problems followed by institutional perceptions; this renders the phenomena an extreme outsider in organizational processes.

My perspective was shaped early in my life. My undergraduate thesis was on the effectiveness of public participation in local planning. The key word is "effectiveness." It is possible for people to be involved in a process while at the same time being largely ignored. For instance, consider the act of shopping at a store. Data emerges at the point of purchase; perhaps it can be regarded as a vote of confidence for the retailer. The retailer can bring in analysts to study all sorts of patterns in revenue, product categories, and use of facilities. People are definitely "involved" in the process. But the extent of that involvement is confined - or one might say predefined. There are control and personal autonomy dynamics at play. People exist in data to the extent that they are permitted to do so. Their data is herded into stalls - and in the end it is manifested in the instrumental needs of the institution. As such, although a retailer might enjoy many shoppers for a period of time, it might actually know little about its customers, and particularly what brought on their purchasing decisions. If shoppers stop coming one day, the ignorance of the institution becomes apparent: it has no idea why they came in the first place. All of the analytics is merely window dressing, since it is premised not on phenomena but on instrumentalism.

Somebody once expressed a desire to do my job, explaining that I at least get to handle a lot of numbers and data. I was astonished rather than flattered. Is it possible that "what I do" somehow seems appealing to the casual observer? While it is true that I handle a lot of data, I had to point out that my main role is fairly clerical. I basically check what people do, generate all sorts of data from their production, and create an endless number of charts. What I sometimes enjoy are the "highlights" that accompany the charts. It almost seems as though the charts magically appear out of nowhere. I cross an enormous amount of jungle to get to each clearing. Over the course of these journeys, I handle databases full of metrics. A significant amount of my work is simply in the "handling." I share all of this to point out that I understand metrics. The metrics that I normally handle represent "control opportunities": specifically in my case, I mean that the metrics help to ensure that production criteria are met and standards are maintained.

When a person approaches a situation or problem with the intent of deploying metrics - e.g. as in my case for control opportunities - there tends to be a premise that might not necessarily be true. Metrics are premised on adequacy of knowledge. If an organization knows enough about something, it can gather metrics. Go ahead and gather information about ZLKDFIE. Do it now while I wait here on the computer screen. What kind of measurements should be taken - temperature, lumens, decibels, acidity, voltage, weight, volume? It is hardly clear what data should be collected. To take a tape to measure a length of pipe is to already know that the pipe has length, and that length is often what makes the pipe useful to people. Similarly, quality is something asserted. A person has metrics to measure quality when quality is defined by criteria; the metrics are designed to determine the level of conformance to those criteria. What if this individual doesn't have enough understanding of the underlying phenomena to construct metrics? This is indeed the problem.

Narrative is premised on ignorance - or I guess the less embarrassing term would be inadequacy of knowledge. I believe that some would agree with my argument that science as we have known it is based on "apriorist deliberation" - thinking things through and making assertions in the absence of understanding. It is through the methodic testing of assertions that progress can gradually be made. There is no need for such deliberations in relation to narrative. In my use of the term "narrative," I am referring specifically to a technique of codified narrative such as BERLIN. BERLIN has a coherent methodology leading to outcomes that can be compiled by a computer. A person can start gathering codified narrative right away without setting up any kind of experiment, formulating hypotheses, or applying for a research grant. Nor are there symposiums or academic journals dedicated to the subject - although I guess an informal users group or workshop might be reasonable.

I have a narrative database. Just out of curiosity, I filtered for incidents of "dead body" in the codified narrative. In cases involving dead bodies in my narrative database, there is often an attribute called <false_portrayal>: this means that somebody is not portraying him or herself in an honest manner. Other major attributes include <forced_confinement> and <physical_ability>. Physical ability is an interesting attribute because it tends to mean that there is an ability issue such as disability or impairment. Interestingly, there is often a <person_girlfriend> involved. Among the major behaviours is [use.of_per], meaning that the perpetrator made use of some type of implement, along with [kill.of_per], [discover.of_inv], and [rescue.of_inv]. These all make a lot of sense in light of the presence of a dead body. A *person_hero* is usually initiating action in relation to dead body cases, followed by *investigation_team*, *person_heroine*, *person_victim* (before dying of course), and *person_serialkiller*. Also prominent actors are *person_brother* and, shockingly, again *person_girlfriend*. I call the report containing a detailed breakdown of elements a "cross section."
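To make the idea of a cross section a little more concrete, here is a minimal sketch in Python of how such a breakdown could be tallied from a narrative database. The record structure, field names, and tiny sample corpus are hypothetical stand-ins of my own; they are not the actual storage format or method behind BERLIN.

```python
from collections import Counter

# Hypothetical codified-narrative records: each incident carries
# <attributes>, [behaviours], and *actors* (circumstances) as plain tags.
corpus = [
    {"terms": {"dead body"},
     "attributes": {"false_portrayal", "physical_ability"},
     "behaviours": {"use.of_per", "kill.of_per", "discover.of_inv"},
     "actors": {"person_hero", "investigation_team", "person_girlfriend"}},
    {"terms": {"dead body"},
     "attributes": {"forced_confinement"},
     "behaviours": {"kill.of_per", "rescue.of_inv"},
     "actors": {"person_heroine", "person_victim", "person_serialkiller"}},
    {"terms": {"theft"},
     "attributes": {"false_portrayal"},
     "behaviours": {"discover.of_inv"},
     "actors": {"person_hero", "person_brother"}},
]

def cross_section(records, term):
    """Tally the elements of every record that mentions the given term."""
    tallies = {"attributes": Counter(), "behaviours": Counter(), "actors": Counter()}
    for record in records:
        if term in record["terms"]:
            for kind in tallies:
                tallies[kind].update(record[kind])
    return tallies

section = cross_section(corpus, "dead body")
for kind, counts in section.items():
    print(kind, counts.most_common())
```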

In an organizational setting, perhaps the codified narrative would be less macabre. On my way to work this morning, I was listening to the head of an association of medical professionals. He was discussing the need for innovation in places like hospitals to prevent the accidental deaths of patients. He spoke about gathering more systems-oriented data so stakeholders can better understand what went wrong and how to correct problems. A quantitative approach is highly decontextualized. (It might be possible to determine how many things went wrong but not necessarily the context.) Here then is a situation where the organization might want to gather codified narrative in order to develop metrics. Metrics can be formed by deconstructing the narrative as I did with "dead body" and then examining these cross-sectional elements for control opportunities. What follows is a brief discussion of cross-sectional comparisons.

Uncontrolled and Controlled Cross-sectional Comparisons


An uncontrolled comparison is a comparison of elements "within" a cross section asserted to mean something - such as "bad situation." As in the case of my dead-body example, a cross section normally contains the following: <attributes>, [behaviours], and *circumstances*. Whether or not the assertion has merit is a separate issue. For example, perhaps *person_brother* and *person_girlfriend* are similarly invoked in cases not involving dead bodies. I recall that in "The Singing Bone," a two-page tale by the Brothers Grimm, two brothers were indeed involved in fratricide along with an attractive lady. I guess fratricide is practically a biblical theme. In any event, I hope readers catch my gist. "Patient death" might be no less prevalent in narratives outside the given parameters. Consequently, a controlled cross-sectional comparison might be worthwhile.

It is possible to examine the elements of two cross sections by "subtracting" one from the other. Arithmetic subtraction of numbers normally results in a new total. The subtraction of one cross section from another results in a distribution of elements. Visualize a double-sided ledger with a left side for "bad situation" and a right side for "good situation." Subtraction leads to a distribution over this double-sided ledger. The outcomes are not ideal since the cancellation of elements depends on sample size, the depth of the narrative extraction, and of course the depth of storytelling. Keep in mind that metrics from narrative should only be considered for "repetitive" day-to-day cases requiring routine intervention. (By the way, this remains a work in progress. I am still sorting out the exact details.)
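Staying with the hypothetical helpers from the earlier sketch, the subtraction described here can be read as differencing the element counts of two cross sections: whatever remains positive lands on the "bad situation" side of the ledger, and whatever goes negative lands on the "good situation" side. This is only my rough interpretation of the procedure, not the exact algorithm.

```python
from collections import Counter

def subtract(bad, good):
    """Subtract a 'good situation' cross section from a 'bad situation' one,
    returning a double-sided ledger of leftover elements per kind."""
    ledger = {}
    for kind in bad:
        diff = Counter(bad[kind])
        diff.subtract(good.get(kind, Counter()))
        ledger[kind] = {
            "bad_side": {element: n for element, n in diff.items() if n > 0},
            "good_side": {element: -n for element, n in diff.items() if n < 0},
        }
    return ledger

# Example reusing the earlier sketch (hypothetical filter terms):
# ledger = subtract(cross_section(corpus, "dead body"),
#                   cross_section(corpus, "theft"))
```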

Another type of cross-sectional comparison might be useful for examining pathology or development: comparing the things that make a situation worse against those that make it better. These cross sections can likewise be subtracted, resulting in a double-sided ledger. This kind of comparison might be worthwhile for determining which actions should or shouldn't be taken in relation to a particular problem. The resulting metrics would quantitatively express narrative events.
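In terms of the sketch above, only the inputs change: the two cross sections would be drawn from cases where the situation deteriorated and cases where it improved. The filter terms below are hypothetical placeholders for whatever the organization actually records.

```python
# Hypothetical reuse of the earlier helpers; whatever survives on each side
# of the ledger hints at candidate "do" and "don't" metrics.
# ledger = subtract(cross_section(corpus, "situation worsened"),
#                   cross_section(corpus, "situation improved"))
```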

Context Before Quantification

The idea of going "all narrative" stems from the fact that some scenarios rarely if ever repeat. For example, although terrorism is a big subject, and there are certainly a number of incidents, using the term "many" or "numerous" might be an exaggeration. The terrorist events that do occur might not recur in a similar or comparable manner, if at all. One would have to consider terrorism by using hypothetical narratives for the corpus. On the other hand, metrics used by organizations tend to be associated with repetitive day-to-day processes such as those related to production. The corpus can therefore be experiential. Irrespective of constitution, whether quotidian or extraordinary, cross-sectional comparisons can provide guidance to formulate metrics. How exactly to go about formulating those metrics on an algorithmic level is a separate blog altogether - along with any concerns about subtracting cross sections.

If I had to distinguish between a metric and a measurement, I would say that a metric has the effect of directing measurements in a particular manner. For example, the criteria surrounding a metric called "product quality" give rise to the data that eventually gets collected. The development of metrics seems to invite this process of top-down projection over the accumulation of data. In contrast, codified narrative surrounds the underlying phenomena. It is shaped by the articulation of phenomena. It is possible to develop metrics from the context of the phenomena themselves, giving rise to what I call the "metrics of phenomena." Codified narrative is therefore an instrument of articulation. In the absence of knowledge, it seems contrary to all logic and reason to impose or project meaning over events or behaviours. The context should be set by the phenomena themselves. This then is the strength of using the metrics of phenomena: the resulting data is sensitive to the context of the source as opposed to the context of the researchers.
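As a final hedged sketch, assuming cross-section tallies like those in the earlier examples, the most prevalent elements could be promoted into monitoring metrics so that new cases are scored against a profile set by the phenomena rather than by predefined criteria. The helper names and the simple count-based scoring are my own illustration, not an established algorithm.

```python
from collections import Counter

def metrics_of_phenomena(section, top_n=5):
    """Turn the most prevalent cross-section elements into candidate metrics."""
    metrics = []
    for kind, counts in section.items():
        for element, _ in counts.most_common(top_n):
            metrics.append((kind, element))
    return metrics

def score_case(case, metrics):
    """Count how many phenomenon-derived elements a new case exhibits."""
    return sum(1 for kind, element in metrics if element in case.get(kind, set()))

# Hand-written cross section standing in for cross_section(corpus, "dead body")
# from the earlier sketch.
section = {
    "attributes": Counter({"false_portrayal": 2, "physical_ability": 1}),
    "behaviours": Counter({"kill.of_per": 2, "use.of_per": 1}),
    "actors": Counter({"person_hero": 1, "person_girlfriend": 1}),
}
watchlist = metrics_of_phenomena(section)
new_case = {"attributes": {"false_portrayal"}, "behaviours": {"use.of_per"}}
print(score_case(new_case, watchlist))  # higher scores resemble the profile
```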
