

"Measurement owes its existence to Earth; estimation of quantity to measurement; calculation to estimation of quantity; balancing of chances to calculation; and victory to balancing of chances." - Sun Tzu, The Art of War (Translated by L. Giles)

The quote from Sun Tzu seems to suggest how a military leader gathers data, adapts to different situations, and makes decisions weighing the circumstances. It says that the balancing of chances depends on "calculation." I notice that military leaders are sometimes portrayed almost as dictators: the person in command asserts his or her authority, leaving others to silently obey. They might be described as "calculating," although I haven't heard of nerdy military leaders relying heavily on their calculation skills. Perhaps leadership has become popularized as the ability to direct others rather than to balance competing risks and opportunities. In this blog, I will be separating two roles that some might lump together: 1) the manager's role to set criteria, sometimes in the absence of data; and 2) the advisor's role to confirm the effectiveness of criteria in the presence of data. I will approach the distinction mostly in relation to the process of "quality control," where I consider the separation straightforward and easy to explain. In practice, the idea of quality is about defining and labeling: e.g. suitable versus non-suitable; acceptable versus non-acceptable; compliant versus non-compliant. On the surface the focus seems to be on the products being tested or checked. But quality "control" is really about control over criteria: it is this control that determines how products are recognized. Any data collected pertains not to quality as an absolute but rather to how facts relate to the criteria.

An article in the National Post describes how airport security personnel have been using a behavioural scoring system to determine which passengers might be terrorists. I know this might not leap out as a quality control situation, but comparable processes are involved. There is a standard or listing of criteria. Instead of being applied to different products, the list is applied to passengers. The scoring represents "criteria-driven recognition" of terrorists. Some people would question whether the application of such criteria is genuinely effective. I certainly believe that airport security requires some kind of criteria to operate. I merely point out that having criteria does not in itself mean success. Airport security might be doing precisely the opposite of screening for terrorists: it could be setting a straighter path for them to follow or not follow. In allocating resources towards the detection of aberrant behaviour, airport security seems to require that terrorists be distinguishable from the general population by superficial characteristics. The Nazis distinguished between Aryans and other people using physiognomic characteristics. Arguably, this process holds some similarities to screening methods for quality control.
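The scoring mechanism described in the article can be reduced to a weighted checklist. The sketch below is entirely hypothetical - the indicator names, weights, and threshold are invented, not the actual system - but it shows how "criteria-driven recognition" turns observations into a label:

```python
# Hypothetical behavioural scoring: every indicator name, weight, and
# threshold below is invented for illustration; none of this reflects
# the real screening system described in the article.
CRITERIA = {
    "paid_cash": 2,
    "one_way_ticket": 3,
    "avoided_eye_contact": 1,
    "no_checked_luggage": 2,
}
THRESHOLD = 5  # a total score at or above this flags the passenger

def score(observations):
    """Sum the weights of every criterion present in the observations."""
    return sum(w for name, w in CRITERIA.items() if name in observations)

def flagged(observations):
    return score(observations) >= THRESHOLD

print(flagged({"paid_cash", "one_way_ticket"}))  # 2 + 3 = 5 -> True
print(flagged({"no_checked_luggage"}))           # 2 < 5 -> False
```

Notice that the data this produces - scores and flags - says nothing about terrorists as such; it only records how observations relate to the chosen criteria and weights, which is the point being made here.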

The presence of an institutional response does not mean that the underlying problem, issue, or concern has actually been addressed. When organizations decide to collect "only the most relevant data," it is important to recognize that in fact they might be collecting just the opposite. The setting of criteria has nothing to do with adaptation or improvement but rather ensuring that a particular course of action or method is pursued. The recent murder of a cartoonist in France was considered by many as an act of religious extremism and maybe terrorism. Some conversations in Canada questioned whether such cartoon portrayals represent a hate crime against a particular cultural or ethnic group. It's not a radical assertion. Even Pope Francis had some comments against ridiculing the faiths of other people. The social contexts of two nations might attach different meanings to the exact same events. It's possible that the disenfranchisement of minority groups can contribute to violent outbursts and deviant behaviours; but this reality would be "invisible" if social conventions allow the events to be construed only as acts of terrorism. Thus, the ensuing institutional response in France would seem to entirely support and reinforce preexisting social fragmentation. I could for instance just for laughs do a cartoon portraying French children as prostitutes.

The term "quality" is used in relation to the outcomes of production. When discussing the contributions of people in production processes, the usual term is "performance." Nonetheless, there are conceptual commonalities between performance and quality. I recall some recent research on hockey helmets previously approved by a standards agency. The helmets conform to criteria as embodied in our safety standards. The new research indicates that most of the helmets seem to provide little if any protection from a type of injury common among those who play hockey. In human resources, there is a question about the extent to which tests can or should be used to distinguish between employees. In Canada, in a case involving a female firefighter, the employee was terminated from her job due to her inability to pass an aerobic test; however, it was determined that the testing was unrelated to her ability to perform her job. The employer was found to be in violation of human rights. I imagine that in both cases, the scientists involved in the testing did not refute but rather promoted their test regimes. They weren't actually testing to gain insights but rather to confirm particular assertions. What is true of science is likewise the case with quality. Quality isn't in the object being tested but rather in the criteria applied during testing.

In the United States, there was a multi-billion dollar funding package to clean up contaminated sites called the Superfund. I am familiar with a similar initiative in Canada. I would like to contrast the budgeting criteria against those of interested stakeholders. The allocation of funds might be determined on the basis of engineering risk assessments; this is not necessarily in agreement with regional objectives perhaps more aligned with long-term development. Thus the data obtained to determine compliance on the basis of one body of criteria might be out of place in relation to another even using exactly the same facts. Perhaps the same can be said of stimulus spending where one stakeholder might be interested in using money to fund expansion of transit services while the body providing the funding could be more interested in generating employment opportunities outside the city core. The data collected does not have an absolute value that is shared and interpreted the same way by all parties. This furthers the idea that data exists to support a particular management regime; this data tends to reflect the priorities and objectives of those that set the criteria.

Similar types of conflicts can occur within a particular agency, for example in a department responsible for military spending: should $20 million go towards the clean-up of a firing range or to dig up leaking underground storage tanks? If the criteria giving rise to quality conformance data were genuinely relevant (that is to say, connected in some way to reality), it should be possible to draw comparisons based on the nature of the relevancy. One would know whether the benefit of a particular project exceeds the benefit of another. But the application of conformance criteria is not at all about benefit. It's about ensuring implementation and completion in a specific way by a specified time. Since the conceptual assertion of "quality" from conformance data extends from criteria, this particular type of data would tend to reinforce the contextual focus established by management. By the way, when I use the term "management," I do not mean people but rather the activity of managing things. Those individuals responsible for managing a firing range are not necessarily managers in a personnel administration sense. So "management criteria" are the standards, rules, guidelines, requirements, practices, policies, and procedures that determine how resources are managed.

In all of the cases above, the criteria serve to shape reality and therefore the meaning of any data collected. Criteria can cause somebody with certain behavioural characteristics to be labeled a terrorist. They can make products appear safe. A good employee can be portrayed as inadequate. Suffice it to say, there are many examples of criteria being used to assess aspects of suitability. Sometimes, the criteria are out of place. Or perhaps they were appropriate at one point only to become less so as a result of change. I have described "projection" as the regime where criteria give rise to data relating to prescriptive conformance. I call the resulting data the "metrics of criteria." The prevalence of projection highlights the need to have certain specialists serve as a balance for managers. I'm unsure if these advisors should be described as managers themselves since they don't necessarily manage anything. However, there should always be somebody, or perhaps something, filling the advisory role.

Role of the Manager

A manager responsible for a particular production line might decide, after discussions with executives, that certain data must be collected for compliance while additional data would be desirable for decision-making. In terms of compliance, there are health and safety standards that necessitate the collection of data such as the number of accidents and injuries. This manager is also concerned about attendance, inventory, and production. It should be apparent from this example that it is not necessary for a manager to have data in order to set criteria. In fact, this manager is setting the criteria in order to collect data. It is quite difficult to separate the criteria from the intent of the resulting data: the data is meant to help the manager satisfy specific needs. These needs precede and give rise to the data. Production demands dictate what data gets counted and therefore what aspects of reality materialize on the radar.

Role of the Advisor

There can be a separate individual, an advisor or specialist, sometimes responsible for the collection of the data but in any event responsible for the data. If the arrangement in an organization is hospitable, this advisor might be able to carry out the following:

i) confirm the extent to which the data collected seem to satisfy management criteria;

ii) determine what criteria the data collected seem to genuinely satisfy; and

iii) ascertain the type of data that should be collected to satisfy the criteria.

Notice how I avoid a debatable point: iv) revise or create new criteria in response to the data collected. Some probably consider this last item more of a management function; yet it really seems to belong on the advisor's list. This leads me to suggest that, in certain respects, the advisor can in fact perform a management function, maybe even better than a traditional manager. I believe that there is a spectrum of different types of jobs pertaining to projection. Perhaps a large number of jobs are more related to the regime containing the advisor, which I have described in previous blogs as "articulation."
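Checks (i) and (ii) on the advisor's list can be sketched concretely. In this illustration, all record fields and criteria names are invented: each criterion is a named predicate, and the advisor reports the fraction of collected records satisfying each one - including criteria that sit outside management's list:

```python
# Hypothetical production records; every field and criterion is invented.
records = [
    {"defects": 0, "cycle_min": 48, "injuries": 0},
    {"defects": 2, "cycle_min": 55, "injuries": 0},
    {"defects": 1, "cycle_min": 61, "injuries": 1},
]

# (i) the criteria management actually set
management_criteria = {
    "zero_defects": lambda r: r["defects"] == 0,
    "cycle_under_60": lambda r: r["cycle_min"] < 60,
}
# (ii) other criteria the same data genuinely speaks to
other_criteria = {
    "injury_free": lambda r: r["injuries"] == 0,
}

def conformance(criteria, data):
    """Fraction of records satisfying each named criterion."""
    return {name: sum(1 for r in data if pred(r)) / len(data)
            for name, pred in criteria.items()}

print(conformance(management_criteria, records))
print(conformance(other_criteria, records))
```

Check (iii) - deciding what data should be collected - is then largely a matter of noticing which predicates cannot be evaluated at all against the current records.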

Scoring the Worker and Product - Using Defined Criteria

It is understandable to think that effecting control over a production environment might also yield control over the outcomes of production: e.g. that tightly regulating the pace of an assembly line might help the manufacturer assert or maintain its placement in the market. But the fact that an organization conforms to a regimen of behaviours provides no assurance of success. If a company is struggling and headed in the wrong direction, following prescriptive criteria might guarantee failure. If an organization is experiencing criteria-related difficulties, I doubt that any amount of restructuring can alter the outcomes of production. It is possible for an organization to produce products that are faulty, dangerous, or that people do not care to purchase - and to carry out these activities really well.

There are particular ways to collect data using defined criteria. Surveys containing closed-ended questions can be used. Simple checklists can confirm status or state, perhaps using a behaviourally anchored list. If the production environment is repetitive or highly automated, data-loggers would be feasible. Any kind of recurring tabulation - e.g. number of cans, boxes, or skids - gains its meaning from external sources. A lot of counting confirms the presence of management controls over processes. These data-gathering tools and methods seem to be compatible with mass production and the commodification of labour. One would expect to find defined criteria in tightly controlled production environments: a food processing plant or pharmaceutical manufacturing facility can be expected to have many defined criteria ensuring the quality and safety of its products. Another reason to conform to prescribed criteria is to meet non-production objectives that are nonetheless socially desirable: e.g. international standards promoting environmental stewardship; rules and guidelines to meet local regulatory requirements; codes of conduct to give employees a sense of security and fair process. I am certainly not disputing the need to have criteria.
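As a small illustration of closed-ended collection (the checklist fields, stations, and answers here are invented), a defined-criteria regime accepts only answers drawn from a fixed list and then tabulates them - the counts mean nothing except in relation to the criteria:

```python
from collections import Counter

# Hypothetical closed-ended checklist for one shift; the stations,
# fields, and allowed answers are invented for illustration.
ALLOWED = {"pass", "fail", "n/a"}

checks = [
    {"station": "filler", "seal_intact": "pass", "label_aligned": "pass"},
    {"station": "capper", "seal_intact": "fail", "label_aligned": "pass"},
    {"station": "filler", "seal_intact": "pass", "label_aligned": "n/a"},
]

# Defined criteria: any answer outside the closed list is rejected.
for row in checks:
    for field, answer in row.items():
        if field != "station":
            assert answer in ALLOWED, f"open-ended answer rejected: {answer}"

# Recurring tabulation - the kind of counting that confirms the
# presence of management controls over the process.
tally = Counter(row["seal_intact"] for row in checks)
print(dict(tally))  # {'pass': 2, 'fail': 1}
```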

Scoring Everything Else - Undefined Criteria

I believe that it is routine for management criteria to be set without any means to confirm effectiveness. Even when confirmation occurs, there is no reason to believe that the criteria will remain effective perpetually. It is also important to accept the likelihood that conformance data, once collected, cannot be easily adapted to accommodate changes to criteria. The advisor might have to confront the following challenges: 1) collecting data beyond the scope of current criteria; and 2) confirming lack of effectiveness from data likely within the scope of criteria. I believe that these are challenging tasks. Consider how common it is for a new manager to approach a complex situation involving large amounts of data. This individual would hardly have command over every aspect of production. However, he or she would likely have specific expectations as embodied in management criteria.

In a number of blogs, I describe my technique of using mass data assignments. I think that my colleagues would consider it my attempt to "keep track of everything." In fact, I try to "keep track of everything in relation to everyone." This is an ambitious undertaking involving many obscure algorithmic operations. Rather than elaborate on the technique here, I just want to underline how an advisor can, or at least should, attempt to adapt to any kind of management criteria. For the purpose of providing balance, it is not relevant whether or not the advisor agrees with the management criteria; nor is there a need for direct involvement in setting criteria for the role of the advisor to be useful. Keeping track of everything in relation to everyone is already a difficult goal - and is itself a unique challenge. The balance is in the balancing - not in the controlling.

Screening, Eligibility, and Selection Criteria

A major grocery chain in my area recently announced that it would be selling "ugly fruit." This is fruit that doesn't appear to be quite right. It seems that large amounts of fruit are routinely discarded due to odd shapes, discolorations, markings, and other apparent deformities. However, in all likelihood, the fruit is still fine to eat. A conventional criteria-driven approach to quality control can be used to screen out those misfits that seem out-of-place and unacceptable for consumption. But it's not necessarily the fruit that came to exist outside the norms of society. It might be that society has emerged outside the norms of the fruit. Not only this: our preconceptions have become so narrow that few shoppers have any idea what "normal" fruit really is. Normal for us involves unnatural selection, genetic modification, and heavy amounts of screening.

Perhaps the most common use of management criteria for quality control has little to do with management or achieving quality. I suggest that in the quotidian - in a factory, an interview room, or some other setting - the main purpose of criteria is to exclude details - i.e. to screen out people, facts, methods, and products. In an effort to achieve its goals, a company will commonly initiate processes of elimination. Hiring criteria might be justified as "getting the cream of the crop" - i.e. excluding the least suitable candidates. These days the main purpose of criteria is probably to simplify decision-making in an increasingly complex business environment. Simplifying decision-making doesn't actually mean the problem has become simple. It means that the people doing the analysis have chosen to pose the problem as simple. The outcomes of excessive simplification might be entirely nonsensical and inappropriate.

I will share a recent experience purchasing some stocks. I'm unsure how others approach stock selection. At the initial stage, I usually consider the key statistics: e.g. the P/E ratio; changes to revenue; diluted EPS; and, for me, dividends. Investor data enables criteria-based quality checking: it supports the sort of rapid assessment that makes mass screening possible. Assuming I have selection criteria, I can quickly dismiss many dozens of candidates without giving the matter much thought. For some investors, this process of screening might be the entire selection process. It is a tempting proposition to be able to find a "winning formula" in criteria. It should be no surprise when organizations direct resources towards this type of intellectual capital. For me, the key statistics only provide a particular perspective. I recognize that the numbers have limitations. It is difficult to embed a great deal of information in data intended to conform to standard reporting methods.
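The screening stage described above amounts to a handful of threshold comparisons per candidate. A sketch, with invented tickers and figures (the thresholds are arbitrary examples, not a recommendation):

```python
# Hypothetical key statistics; every ticker and number is invented.
candidates = {
    "AAA": {"pe": 11.0, "rev_growth": 0.04, "dividend_yield": 0.035},
    "BBB": {"pe": 38.0, "rev_growth": 0.22, "dividend_yield": 0.0},
    "CCC": {"pe": 9.0, "rev_growth": -0.02, "dividend_yield": 0.05},
}

def passes_screen(stats):
    """Dismiss a candidate the moment any threshold criterion fails."""
    return (stats["pe"] < 20
            and stats["rev_growth"] > 0
            and stats["dividend_yield"] >= 0.02)

shortlist = [ticker for ticker, stats in candidates.items()
             if passes_screen(stats)]
print(shortlist)  # ['AAA']
```

A bank showing "peculiar" negative numbers would be dismissed here before any detailed analysis begins - which is exactly the limitation noted above: the screen ends the story; it does not tell it.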

I was studying a bank stock. I noticed some peculiarities in the statistics: i.e. negative numbers. For some investors, the story would end here. A certain percentage of investors, finding their criteria unmet, will choose to terminate their acquisition plans. After reviewing the company's latest quarter, their website, and other sites covering details about the company, I eventually decided to buy shares. I found opportunities for this bank that are not entirely banking-related, taking recent technological acquisitions into account along with its unique social, cultural, and historical business setting. Some might question why I didn't just do the detailed analysis right from the beginning. There would be too many candidates to study. I would likely still be thinking matters over or considering giving up. It's important to have screening criteria: their development and determination can strategically affect what data gets collected; the meaning attached to that data; the conclusions drawn from the data; and the decisions made from those conclusions.

Lack of Balance

It's been said that guns don't kill people but rather that people kill people. However, in society, people come and go. It's the structures that persist to oppress transients. Conformance data should be considered in relation both to its intent and to its capacity to limit the articulation of important environmental conditions. Data conceived from ineffective criteria can acquire permanence, particularly in social settings, which I suggest include many production environments. Although I work in evaluating quality, I don't consider myself a professional in this area. Discussions relating to quality do indeed tend to focus on adherence, conformance, and compliance, usually to particular standards. I am quick to support safety standards and accept the importance of conformance data to promote public safety. However, I believe that businesses tend to place inadequate emphasis on balancing criteria against tangible outcomes.

Consider for example all of the effort that goes towards building an automobile - and the relatively routine nature of safety recalls. Is the quality control system being used to truly test for safety, or is it simply ensuring conformance to production standards? I believe that at the point of checking, the focus tends to be on determining adherence to prescriptive criteria, for instance to ensure safety. Whether or not the criteria actually lead to safer vehicles is a separate question; this is perhaps determined once accidents occur, insurance claims are filed, and lawsuits are launched, as if production were insulated from the outcomes of production. I suspect that in some organizations, quality checking has merely become window dressing. If a database contains only conformance data - confirming that the automobiles conform to all listed criteria - that data might not mean a great deal. Management cannot possibly anticipate every problem during the development of criteria. It is a fundamental question, actually, the extent to which any top-down regime can bring about desirable outcomes.

Consider the prevalence of annual audits. In retail, these audits might be done to confirm inventories and their locations. However, it is quite a different level of checking to assess methods, procedures, and processes using codes, standards, and practices. Imagine enduring rigorous annual quality testing only to continue losing business to a competitor that makes no use of such things. How about a company implementing a professional dress code only to discover its customers turning to online services? In my part of the world, a leading cola maker recently decided to change its formulation to contain less sugar. It seems that consumers have been turning away from its products due to health concerns. It seems to have taken the company several decades to come to this conclusion - so completely is its structural capital insulated from reality. Nonetheless, the sugar-to-profits growth concept remains pervasive. I know that a leading donut retailer makes some of the most horrifically sugary donuts imaginable. It's a preconception that people spend more for sweetness. Sadly, the company has no way of knowing how I and other customers feel except through lack of orders. There is no adaptation. There is only trial and error. Really, a lab rat going through a maze can exhibit as much intelligence. Save yourself a suit and get a lab rat. I know that last sentence might not make sense to everyone.

I started off the blog with a quote from Sun Tzu about "balance." I said that a lack of balance can cause any data collected essentially to extend management (i.e. projection): the data can become purely instrumental rather than representative of any underlying phenomena. Balance helps phenomena participate in the data (i.e. articulation) - to exist outside the scope of prescriptive criteria. In order to reinforce the conceptual delineation between projection and articulation, I created an embodiment for balance - a managerial counterpart called an "advisor." The advisor makes use of special tools and methods to determine the extent to which managerial criteria are effective in different contexts. I gave a number of examples where the criteria appear to be disassociated from their outcomes; there seems to be no self-awareness or self-correction process to deal with faulty criteria. I said that as work and production environments become increasingly complex, the disconnection is likely to be furthered as managers attempt to pose the complexity in simpler terms. The simplification provides evidence of a lack of balance. "These helmets are safe because the tests say so." "These people are terrorists according to our scoring system." Everything is straightforward when there is no need to examine outcomes, impacts, and consequences.


Tags: alienation, articulation, assurance, authority, consequences, control, criteria, delegation, disablement, disassociation, disconnection, guidelines, impacts, management, models, outcomes, philosophies, policies, practices, prescription, prescriptive, projection, quality, racism, regulations, reward, risk, rules, safety, sensitivity, separation, social, theory


Comment by Don Philip Faithful on April 20, 2015 at 10:36am

Capturing and understanding the deeper meaning of data is such an important issue. I will never stop writing about it. Being careful not to give too many details: I evaluate the work that people do. I generate a lot of data that might be regarded as "performance data." I am normally not responsible for the criteria by which people are judged; but there are often extremely good "business reasons" for the criteria. However, I recognize that employees are good sources of data about the internal production and external business environments. The "data" gathered from employees can be regarded in relation to environmental metrics (externally defined) rather than as mere assessments of performance (internally extended). In my research on stress, I frequently encountered questions about what the data actually represented: e.g. inability to perform or adapt; structural problems relating to the work environment; procedural problems relating to work processes. It's really enormous and questionable, the idea of trying to "define meaning" - what everything signifies and means. Yet we symbolically imbue data with meaning as if its meaning were simple and social circumstances superfluous. So, yes, it's definitely a big concern that I think will require great study and conversation.

Comment by Richard Ordowich on April 20, 2015 at 8:04am

Don, great discussion. In my use of the term values I am referring to all values not just moral. What I hope for is a more socio technical approach to data. Data designers, managers, users etc. should all be Data Literate.

Understanding that data is not agnostic or truth or facts. To understand how the values affect the data and how data affects values. To consider the story behind the data, using principles of Data Literacy.

Comment by Don Philip Faithful on April 20, 2015 at 7:41am

That's a great comment, Richard. I certainly don't want terms to create a snag since I'm very much in agreement with your post. I believe that idealism can sometimes manifest itself in work environments as screening criteria. For example, the firefighter's inability to pass an aerobic test may have originally been regarded as a functional requirement; but it had the effect of excluding women and older workers from employment opportunities. I avoid proposing the prevalence of socially disabling idealism in the workforce due to the question of production and productivity. For example, in a call centre, I cannot say that the internal processes ostensibly intended to ensure particular levels of service involve idealism that can be attributed to managers. (Locus of control = managers.) But I definitely recognize elements of social construction in most workplace controls. (Locus of control = business environment.) Idealism suggests that if I change the manager, so too might productive behaviours change. I realize this depends a bit on what we choose to regard as idealism. I don't claim to be an authority. I choose to associate my observations with the production environment - to criticize business practices. But this in no way negates a more polemic examination of the workplace that might connect to individuals: e.g. workplace bullying might be evident in unattainable standards of performance. I apologize for the brevity of this response and if my thoughts are a bit choppy at the moment. I do my best to make time to respond to comments.

Comment by Richard Ordowich on April 20, 2015 at 7:00am

The overarching factor governing most of these examples is not quality but values. Data collection, processing and interpretation are all governed by the values of the individuals, groups, organizations and society. What data is collected, how it is collected, how it is disseminated and accessed are all determined by human values.

We seem to be trying to apply principles and processes from the physical world, such as product quality, to a phenomenon - data - that lacks any intrinsic characteristics. Data is a symbol created by humans representing their values. Data is not fact but a human-constructed reality.

We need to shift the conversation about data from the technical to the social. There are no technical solutions to terrorism, “big brother”, or privacy and security until we understand human values. Society operates on values, not data. Data is reflective of those values, not independent of them. This is why it is so difficult to define data quality. Data quality is subjective because data is subjective.
