
Variable Metrics Formats for Mass Data Assignments


I created this blog to further discuss the issue of mass data assignments, a methodology that allows qualitative data events to be incorporated into metrics such as performance indicators. These assignments are routine for me now after having developed a prototype. However, I am unaware of the prevalence of this or similar techniques in the broader community. So I periodically work the topic into my blogs to help stimulate discussion. When quantitative data exists, it means that we had something to quantify. The data is less an aspect of the thing being measured than an extension of the people doing the measuring. It is not truth but an elaborate portrayal to help satisfy our needs. It is reasonable for people to have a need for data, but it is necessary to distinguish the metrics from the underlying reality. Further, to the greatest extent possible, the larger reality should remain connected to the metrics. In this blog post, I will be exploring the idea of maintaining steady qualitative events but exploiting variable metrics formats. As an example, I will be discussing the notion of mass data assignments in relation to an organization that no longer exists - based on its surviving management records. This is not an ideal situation decades after the closure of the organization. I did some research during my graduate studies that brought me to the agency's records, which are currently inaccessible to the general public. I hope readers find the material interesting despite its historic nature.

I remember when I first started studying stocks "really closely" many years ago, I had some interest in a technical approach making use of both price and volume. I described the product of a daily price fluctuation and its volume as a shift in sentiment. I later referred to the sum of these shifts as sentiment. I started using this metric en masse on many different stocks. I was surprised one day to discover - in the process of running an imaging system I had designed for sentiment - that the price of a stock alone approximated the metric reasonably closely; this reduced the need for volume, which was perfectly fine given that I didn't always have volume available. But it was a fundamental shift in thinking for me to regard data in terms of both fluctuations and running totals. Once no longer shackled to volume, I found myself successfully using the imaging system on all sorts of data, including data pertaining to earthquakes, electrocardiograms, and tidal levels. This experience led me to conclude that sometimes it can be helpful to step away from fluctuations; it can be better to think about the data as a net effect or sum of fluctuations. Conversely, it might be useful at times to dissolve or deconstruct data to reveal its fluctuations, or the fluctuations of its fluctuations. Being creative with metrics might seem illogical in a purely arithmetic sense - since things stop adding up in a customary manner. However, if the objective is to find a reasonable behavioural or algorithmic match, there is no need to cling to any particular format.
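
To make the metric concrete, here is a minimal sketch in Python of the sentiment calculation as I describe it: the daily shift is the price fluctuation multiplied by volume, and sentiment is the running total of those shifts. The column names ("close" and "volume") are simply my own assumptions about how the price data might be laid out.

    import pandas as pd

    def sentiment_shift(prices: pd.DataFrame) -> pd.Series:
        # daily price fluctuation multiplied by volume = one shift in sentiment
        return prices["close"].diff().fillna(0.0) * prices["volume"]

    def sentiment(prices: pd.DataFrame) -> pd.Series:
        # sentiment = running total (cumulative sum) of the shifts
        return sentiment_shift(prices).cumsum()

    def sentiment_proxy(prices: pd.DataFrame) -> pd.Series:
        # the volume-free shortcut mentioned above: price alone tends to
        # approximate the running total closely enough for imaging purposes
        return prices["close"]

The same diff-and-cumsum relationship between fluctuations and running totals carries through the rest of the formats discussed below.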

Some readers might look at the image above and suggest that it contains a great deal of noise. The amount of noise is actually format-dependent. The focus of the illustration is to find patterns in the fluctuations. Given the absence of any cohesive pattern, does this mean that the data has no value? Of course, my argument is that we are looking too closely at the illustration. This is not to say that stepping away from the screen is going to help; I am not saying that people are too close in terms of their physical proximity. In the same way that I made use of price fluctuations in my assessment of sentiment and then later decided just to use the price, I can do much the same with the above data. In the next illustration, although I use the same underlying data, I decided to plot the number of monthly cases rather than the monthly case fluctuations. As we go through these examples, consider the different potential uses given the "format" of the data. My point at this initial stage is really about the choice of formats and how this affects our perception.

Putting the issue of suitability and soundness aside, above we have the sort of illustration that is more hospitable to linear regression. Over the course of my real-life responsibilities, I normally evaluate data in this second format: it is well-suited for determining whether business is speeding up or slowing down, confirming seasonality, and assessing short-term directions. As confusing as the chart appears, perhaps many decision-makers can relate to its general premise: in order to evaluate the condition of a business, it is reasonable to compare monthly loads. Some might complain about my use of line- rather than bar-graphs; but it is quite difficult to see through bar graphs with this much activity. A bar graph is more plausible for shorter sampling periods. I accept the argument that this type of illustration might be more coherent and convincing if it had less data. (It might be easier to make a case using less complicated graphics.) The next image is from the sum of the caseloads rather than the sum of the fluctuations - yet another format from the same data. There is quite a metamorphosis.
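
A brief sketch, using invented monthly caseload numbers, of the "speeding up or slowing down" check that this second format supports: a least-squares line fitted to the levels gives the direction and pace of change.

    import numpy as np

    # hypothetical monthly caseloads (levels, not fluctuations)
    monthly_cases = np.array([42, 45, 51, 48, 55, 60, 58, 63, 61, 67, 70, 72])
    months = np.arange(len(monthly_cases))

    # least-squares straight line through the levels
    slope, intercept = np.polyfit(months, monthly_cases, deg=1)
    print(f"trend: {slope:+.2f} cases per month")  # positive slope = speeding up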

In the above illustration, in what I call a "plough chart," we greatly impair our ability to detect seasonality; but on the other hand it is possible to obtain an overall perspective of the business missing from the previous illustration. I can identify on this chart declining capacity for childcare and "surplus capacity" for consulting. My inferences are debatable decades after the surrounding events took place. I suggest that the agency assisted clients with their childcare needs less often because there was less childcare available. An alternate perspective that I cannot confirm is that the clients required less childcare because they were no longer having or raising children. I would say that rehabilitation capacity remained constant and services operated at full capacity. I will go out on a limb and propose that more consultants became available as the agency operated; there had to be greater demand for consulting services in order for the chart to appear as it does. On a more generic business level, I would say that the pattern for "consulting" reflects a developing market. This chart offers a significant amount of business guidance. The next illustration contains not the sum of the caseloads but rather the sum of the sum of the caseloads.

In the bar-graph inset, I highlight details from fluctuations between the last two periods, providing us with the same totals as the third illustration. While perhaps not particularly meaningful, this fourth format provides us with a rather curious running representation of performance over the recorded period of the organization's service history. The y-axis is no longer useful. Nor is there any activity within the line patterns that provides business or operational guidance. Although this particular illustration is not all that relevant for my own purposes, this is not to say that the format might not be significant in relation to some other type of data or application. The point really is that I might not already know how deconstructed the source data is; so it might indeed be necessary to add the totals of totals.

I once built a model replica of an actual landfill site. After completing the model, I was disturbed by its unusually flat appearance. Having walked over the landfill site on many occasions, I understood the area to be rather hilly. I asked a geography professor specializing in aerial interpretation and remote sensing for her thoughts. She explained that the world is actually rather smooth, although it might not seem so for those walking over it. Relative to us, there appear to be hills; and indeed these are hills from our perspective. As we migrate from the smoothest illustration to the choppiest, it is like descending from high above the planet to its deepest parts. The bits of data important to us likely occupy a particular stratum. The choppiest format can be found in the smoothest in a mathematically compressed form; therefore, it is possible to gain some understanding of all formats from just one. Even if data seems closely aligned with one particular format but not another, we would nonetheless have some insights. This is the general idea behind my use of variable metrics formats. At this time, the prototype is not designed to automatically scan different formats; this is mostly due to a lack of processing power. Also, I tend not to incorporate quantitative metrics in my personal data, where the prototype has had the most influence. I normally use qualitative categories: e.g. terrible, bad, normal, good, and terrific. However, I have some quantitative metrics: e.g. pulse, pressure, and weight. On my prototype, a quantitative metric essentially reflects a prescriptive qualitative regime.
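
The claim that the choppiest format sits inside the smoothest in compressed form can be demonstrated in a few lines: differencing a running total recovers the original fluctuations. The numbers here are arbitrary.

    import numpy as np

    fluctuations = np.array([3.0, -1.0, 4.0, 1.0, -5.0, 9.0])
    running_total = np.cumsum(fluctuations)           # the smooth format (sums)
    recovered = np.diff(running_total, prepend=0.0)   # back to the choppy format

    assert np.allclose(recovered, fluctuations)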

My rationale behind the use of different formats relates to the non-symmetrical or incongruent nature of qualitative events compared to metrics. Just trying to explain the basis for comparison is challenging - so dissimilar does a qualitative event seem from a quantitative metric. I cannot assume that an event occupies the same ontological placement as a metric. If there is no prior knowledge of the algorithmic impacts of interaction, it is necessary to obtain a feel for "discursive congruence": this is the extent to which internally conceived phenomena coincide with the substantive boundaries set by an externality. It is difficult to say with any certainty that a discussion around the dinner table somehow led to the purchase of a new car; that a person's reflections after a television program motivated the decision to pursue a 4-year degree; or that an awkward moment during a blind date triggered a violent rampage in a school the following day. There is a fundamental need to overcome the disconnection: the instruments and systems that hold the fabric of our particular society together are driven by numbers, yet the contributing events and resulting outcomes often evade quantification, thereby becoming invisible or imperceptible to our structural capital.

Some Background on the Organization

I will now provide some details about the organization responsible for the data. This real data is from the management records of an agency that operated for a number of decades - from the 1960s to the early 1990s. However, the data reflects only a 12-year period. The agency provided counselling services to support employees dealing with alcoholism, depression, and various workplace difficulties including stress. There was some talk of terminating the organization and offloading its responsibilities to other agencies. A case was made for decentralization and outsourcing: those responsible for financing the agency felt that the cost of counselling and disability benefits had gone out of control. The third illustration is useful for showing broad systemic patterns: near the end of the organization's operating life, some but not all of the costs associated with its services to employees were indeed sloping higher than the historic trend. This is despite the fact that services were meant to increase employee performance and reduce costs; it seemed as if the investment in services had somehow led to higher costs.

From the illustrations, it should be apparent that a significant change occurred around the 8th or 9th year of operation. There was indeed an important development taking the world by storm at the time: computers had been introduced into work environments; this greatly increased the ability of organizations to handle higher workloads and offer services using surplus capacity. I believe that this was a notable administrative change taking place in the background. There was also something increasing demand for counselling services: computers in the work environment brought about a radically different kind of work setting - inside cubicles, behind desks, facing computer monitors, tapping away all day long. So it is really exciting to be exposed to the numbers decades later. I believe that the charts reveal the pangs of technological change.

Practice of Mass Data Assignment

I was writing some software originally intended to assist with performance evaluation and quality control. However, something unusual happened that caused me to change my use of the code. In quality control, it is customary to generate and keep track of events pertaining to quality: e.g. dented, loose, chipped, and peeling. On the other hand, in performance evaluations connected to quality, there is an attempt to assign quality events to departments, processes, and individuals. "Sarah" might be held responsible for 10 chipped units of product. Perhaps almost unintentionally one day, I decided that instead of using people or departments, I could assign events to different performance grades: e.g. 10 chipped units to "acceptable." Then instead of determining the performance of a team of individuals - e.g. James, Edna, Lucille, Ben, and Julia - I decided to assess a group of grades - e.g. terrible, bad, acceptable, good, and excellent. Thus, I could determine the events that seem more connected to excellent than good. This is my rather sketchy overview of how I started off on the road to mass data assignments. Sometimes in my blogs I refer to this process as "massive data assignment" or - although I deny any ownership - "big data assignment."
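
As a toy sketch of the grade-based assignment, using the event names and grades from the examples above: events are assigned to grades rather than to individuals, and the tallies indicate which events seem more connected to one grade than another. The specific assignments are invented for illustration.

    from collections import Counter, defaultdict

    # (event, units, assigned grade) - e.g. 10 chipped units assigned to "acceptable"
    assignments = [
        ("chipped", 10, "acceptable"),
        ("dented", 3, "bad"),
        ("loose", 1, "excellent"),
        ("peeling", 6, "acceptable"),
        ("chipped", 2, "good"),
    ]

    units_by_grade = Counter()
    events_by_grade = defaultdict(Counter)
    for event, units, grade in assignments:
        units_by_grade[grade] += units
        events_by_grade[grade][event] += units

    print(units_by_grade)
    print(events_by_grade["excellent"])  # events that seem connected to "excellent"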

In terms of this current blog, my objective is to steer this discussion of mass data assignments towards the use of "variable metrics formats" rather than qualitative categories or grades. Recall that near the end of the previous section, I speculated on the factors contributing to caseload increases: a combination of higher capacity and demand, both probably related to technological change. Here then is one situation where it would be desirable to deliberately connect the metrics to a trail of events on which to base our conceptual understanding of the quantities expressed. Rather than simply know that caseloads increased during particular periods of time, researchers should be able to ascertain the extent to which historical circumstances are connected; and this information would be preserved in perpetuity. Mass data assignment is about giving body to metrics. Both the assignments and the determination of relevance are performed systematically. Abundant computer processing power is necessary to support the practice.
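
A minimal illustration of "giving body to metrics" might simply store each period's quantity together with the events assigned to it, so that the circumstances travel with the number. Everything below - the period labels, caseloads, and events - is invented purely to show the shape of the idea.

    records = [
        {"period": "year 8, month 3", "caseload": 61,
         "events": ["terminals installed in the office"]},
        {"period": "year 8, month 4", "caseload": 74,
         "events": ["surplus capacity offered to other units", "seasonal referrals"]},
    ]

    for record in records:
        trail = "; ".join(record["events"])
        print(f'{record["period"]}: {record["caseload"]} cases <- {trail}')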

As I mentioned near the beginning of the blog, I am uncertain about the prevalence of mass data assignments (using a "mass" approach) in the broader community. I have certainly written on the subject for some time. I hesitate to describe myself as the person who first developed a mass approach; but I suppose it is safe to count myself among its pioneers. I leave it to members of the community to self-identify. I will always omit details that would allow for a complete emulation of my specific efforts. Even if a person had all of the software and existing files, it takes a great deal of work to maintain the system. What is really at question is the level of commitment and passion for the cause. As some might discover at some point in the future, it is also necessary to have important character traits: honesty, willingness to embrace failure, and the ability to come to terms with the truth.

Would it be possible for an honest person to miss the emergence of ISIS? It takes a pathological liar to systematically miss something obvious; for the objective is not to understand but rather to feel in control; to be rewarded for being in control; to receive praise and adulation. I remember a series of weight-loss commercials where the celebrity promoter was obviously starting to gain a lot of weight; but she kept praising the weight-loss program as if reality didn't matter. I suppose she had a contractual obligation. The fact that I can assign an enormous amount of data to different metrics doesn't alter the underlying weakness currently in the process: I have to select the metric intended to hold the assignment. So a mass approach requires a strong "operational" sense of ethics. What is the honest thing to do with the data? One must determine what is faithful and true in different operational circumstances. Even an extremely capable data scientist, if he or she lacks ethics, can reach faulty conclusions. On Tendril, because choices involving metrics are made repetitively, character weaknesses, no matter how deeply buried, can lead the algorithmic environment astray. That's my perspective anyways.

Assignment and the Choice of Formats

In the illustration above, I tentatively provide some names for the different formats to assist with referencing. I believe that all of the formats have algorithmic value in relation to mass data assignments but not necessarily in terms of the resulting charts for visual analysis. Perhaps particularly perplexing are the fluctuations of fluctuations offered by trance (from which the data cannot get much choppier) and the sum of the sums offered by race (from which the data cannot get much smoother). Yet if an event seems strongly manifested in trance, it might also be present in trace, albeit as a compressed aspect. If an event can exist in race, it might also be found expanded in pace. This is an interesting area for research and software development. I read in blog posts and forum threads all the time about how professionals in marketing wonder whether their strategies and investments are working; this is not a question that can be easily addressed through the use of metrics alone. The level of inference would have to be quite high without an approach connected to qualitative events. I added a fifth image on the bottom containing a distribution of qualitative events formed using a mass approach. This is mostly to give readers some reassurance that there is a greater purpose; the choice of metrics is only a starting point leading to a much higher level of analysis. In the chart, I identify a family of related events that all seem to have adverse consequences in relation to a number of related metrics. So that's an actionable insight: "doing this is harmful."
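
For readers who prefer the names pinned to operations, here is how I would derive the formats from a single hypothetical caseload series. The mapping is my own reading of the illustrations - the caseloads play the role of trace, so their differences give the fluctuations of fluctuations - and the numbers are invented.

    import numpy as np

    trace = np.array([12.0, 15.0, 11.0, 18.0, 20.0, 17.0, 22.0])  # monthly caseloads

    trance = np.diff(trace)    # fluctuations of the caseloads (the choppiest format)
    pace   = np.cumsum(trace)  # running sum of caseloads (the "plough chart")
    race   = np.cumsum(pace)   # sum of the sums (the smoothest format)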

I have found that some phenomena can be connected to patterns resembling the different formats but which do not strictly follow the idea of fluctuations giving rise to sums (FTS). For instance, considering the totals for a month, somebody selling a particular product might sell more than 5 units 75 percent of the time; more than 10 units 50 percent of the time; and more than 15 units 25 percent of the time. So I am describing a plough pattern or pace. The patterns for the individual days would more likely resemble trace. If we review the individual sales amounts, we might find a pattern like trance. The bulk orders and shipments perhaps behave like race. I suspect that the more lag there is in response to current events, the greater the likelihood of obtaining a sum-of-sums (SOS) pattern; and the more the events can be expressed as instances of something faster or more frequent, the greater the likelihood of a fluctuations-of-fluctuations (FOF) pattern. I am forced to speculate given the lack of resources for research; but I hope to pin down the dynamics in the future, both in terms of the field scanning of metrics and also in relation to mass data assignments.
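
The "more than 5 units 75 percent of the time" pattern can be computed as a simple exceedance share over the monthly totals. The figures below are invented to echo the example.

    import numpy as np

    # hypothetical monthly unit sales
    monthly_units = np.array([7, 16, 2, 11, 9, 21, 4, 14, 8, 18, 5, 12])

    for threshold in (5, 10, 15):
        share = np.mean(monthly_units > threshold)  # fraction of months above the threshold
        print(f"more than {threshold:>2} units in {share:.0%} of months")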

Why Accommodate Qualitative Events?

This entire blog has been about adapting metrics in order to try to better "fit" qualitative events. I think it is reasonable for a person to ask, why bother? Quantitative data is useful to the extent that we already have a strong understanding of the data, and our understanding reflects the nature of the data. For instance, I know what 3 cm represents: it is the physical length of something measured. If we ask a group of people what bullying represents, they might have varying responses; and so the nature of a score of 7 on a bullying scale from 0 to 10 is evasive. I wrote about how qualitative events can sometimes have discursive congruence with metrics; it is this congruence that causes the metrics to reflect the nature of the data. The political conversations between coworkers in a tavern after work might have some relevance to actual election results to the extent that these discussions exhibit discursive congruence. The coworkers can of course talk about going fishing, their all-time favourite pastries, and the high cost of public transit. The congruence then becomes less direct and perhaps evasive. Nonetheless, life as we know it exists within this realm of vernacular discourse - sometimes shared, periodically highly personal, at times concealed from others. These are important albeit intangible events that are difficult to quantify. When we don't have a strong understanding of the data, or our perceptions only involve tiny aspects of its nature, the act of quantification leads us to faulty conclusions. We live a fabricated truth by imposing quantity over matters outside our epistemological authority.

In the past, I think there was a tendency to dismiss data that was quantitatively evasive. I don't wish to question members of the community that persist down this road. But there is room now, given our computer processing capabilities, to push the boundaries. Beyond the technical challenge of meaningfully incorporating qualitative events, there are also market impacts to consider. Remaining in one frame of mind means selling the same software, perhaps all to the same market, causing companies to reach similar solutions and thus reducing the competitive benefits. It is worthwhile to diversify. There is no reason to be uncompromising. Ultimately, some companies will try new and different methods. However, I promote a mass approach not for companies specifically but rather for the data science community. I think that the integration of qualitative and quantitative data will radically alter the discourse surrounding the collection and accessibility of data. My hope is to help the community make use of symbols and objects capable of containing high levels of abstraction, perhaps as deep as thought. In previous blogs, I discussed the task of determining what events to throw in order to help define the parameters of systems. In this blog, I have been focused on finding the most appropriate metrics. However, since I am starting to reach the processing limits of my equipment, there will be fewer posts pertaining to these specific concerns in the future. I will therefore move on to less hardware-dependent topics. It is like I'm stuck in the 60s and 70s without proper equipment. So I leave it to the industry to keep up with my needs.
