
Standardized Performance Friendly to Big Data

Somebody once mentioned to me that there is a need for a standard method of performance evaluation that can be applied to all employees regardless of their exact duties: e.g. to compare a janitor to an accountant. In my jurisdiction, there is a regulatory requirement for "equal pay for work of equal value" that can affect companies with government contracts. I consider the concept of "equal value" complicated due to its subjective nature. Certainly two people handling exactly the same work might be compensated similarly. But we also have to consider level of risk; level of responsibility; hours of work; and of course actual performance level. A related discussion surrounds the exact meaning of the term "work," which is itself a rather deep issue. Taking into account all of the productive resources of an organization, I believe that the debate should consider the relative merits of different assets; but this assertion adds to the complexity of the performance evaluation. Being able to determine how different programs, projects, policies, practices, behaviours, and capital resources contribute to particular outcomes is a Holy Grail of sorts. I find that the following metrics share some structural similarities: 1) effectiveness, for instance in relation to marketing campaigns; 2) performance in terms of human resources; 3) quality, reliability, and conformity for products and services; and 4) change and adaptation strategies leading to restructuring and organizational development. Any method that applies to one area might also be applicable to another, offering many business benefits once fully articulated. How then might it be possible to determine any kind of standard performance? I will be using some simulated data and a process that I call the TIME methodology to examine this question.

Simulation Parameters

The simulation that I prepared for this blog is meant to be simple enough for readers to anticipate the outcomes. The performance of workers in this simulation is determined by their levels of production. Each worker contributes to the production of a single thing in the same manner and nothing else. So already this simulation is unrealistic, since everybody has exactly the same duties. Such a scenario is unlikely to occur in real life. Nonetheless, perhaps this is how comparisons are sometimes done between workers: an objectively verifiable metric is used as the basis to compare productivity. Consequently, Kathleen at reception cannot be compared to Janice in production using such an approach; attempting to do so would create severe distortions in the evaluation. There are 11 agents in total: agents 0 to 10. Each cycle in the simulation contains a workgroup of 5 agents chosen randomly from the 11 available. I arbitrarily set the contribution levels for each agent as follows: agent 0 contributes 0 units; agent 1 contributes 1 unit; agent 2 contributes 2 units; and so forth. The testing environment does not know the individual contribution levels: it only knows the total for each workgroup. The question then is whether performance can be determined based on the involvement of agents irrespective of how much they produce individually (i.e. their amplitudes). However, keep in mind that in this simulation - just coincidentally, to simplify the analysis - performance is the same as amplitude. This is a type of performance evaluation problem that any organization might face, perhaps to determine bonuses and salary adjustments, or for general management purposes. Some readers might complain that there are already ways to determine individual contribution levels. The simulation is meant to demonstrate how the TIME methodology handles performance. I will describe some additional benefits a bit later.
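
For readers who prefer something concrete, below is a minimal Python sketch of the simulation as described: 11 agents, random workgroups of 5, and a testing environment that records only group totals. The function and variable names are my own inventions for illustration; the original simulation was not published as code.

```python
import random

NUM_AGENTS = 11   # agents 0..10
GROUP_SIZE = 5    # each cycle draws a workgroup of 5 agents
CYCLES = 1000     # number of simulated work cycles

# Agent i contributes exactly i units per cycle; this is hidden from the analysis.
contribution = {agent: agent for agent in range(NUM_AGENTS)}

def run_simulation(cycles=CYCLES, seed=42):
    """Return a list of (workgroup, total_output) pairs.

    Only the membership of each workgroup and its total output are recorded;
    individual contributions are deliberately withheld, mirroring the testing
    environment described above.
    """
    rng = random.Random(seed)
    records = []
    for _ in range(cycles):
        workgroup = frozenset(rng.sample(range(NUM_AGENTS), GROUP_SIZE))
        total = sum(contribution[agent] for agent in workgroup)
        records.append((workgroup, total))
    return records

records = run_simulation()
print(records[:3])  # every total falls between 10 and 40
```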


The TIME methodology is all about moving information around in a particular manner; this is more of an algorithmic than a statistical process. Since the movement can involve large amounts of data, I find it reassuring to deposit controls in the data to assess the reliability of operations. I feel that the idea of Big Data is problematic from a practical standpoint in that small errors might be magnified through algorithmic recursion and repetition; this can lead to big mistakes. In response, I make use of diagnostic tests: I add control data to the field data. More or less, the image below shows the normal outcome of diagnostic testing. How events get distributed depends mostly on the construction of contextual relevance. A production simulation can have as its contextual relevance the number of units produced: the more units produced, the greater the performance. Therefore, production in this case is relevant to the context of performance. Although using units might help us to understand how the distribution relates to production, it is not necessary to rely on units of production using the TIME methodology. It is possible to make people busy in order to give the appearance of productivity; such forms of churning can lead to the exhaustion of scarce resources. So on the issue of contextual relevance, relying strictly on units of production is extremely foolish; it is also unnecessary given a more sophisticated path made accessible through the use of the TIME methodology.
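
As a rough illustration of what I mean by depositing controls in the data, the sketch below (continuing from the simulation code above and reusing its `records`) injects control records with known totals alongside the field records and verifies that they come back unchanged. The structure and names are my own; the actual diagnostic procedure is more involved than this.

```python
# Continues from the simulation sketch above (uses `records`).
CONTROL_FLAG = "control"

# Control workgroups whose totals are fixed by construction: the weakest and
# strongest possible groups, with totals of 10 and 40 units respectively.
EXPECTED_CONTROLS = {
    frozenset({0, 1, 2, 3, 4}): 10,
    frozenset({6, 7, 8, 9, 10}): 40,
}

def add_controls(field_records):
    """Tag the field data and append control records with known totals."""
    tagged = [(group, total, "field") for group, total in field_records]
    tagged += [(group, total, CONTROL_FLAG) for group, total in EXPECTED_CONTROLS.items()]
    return tagged

def check_controls(tagged_records):
    """Verify that every control record still carries its expected total."""
    for group, total, tag in tagged_records:
        if tag == CONTROL_FLAG:
            assert total == EXPECTED_CONTROLS[group], f"control corrupted: {group} -> {total}"

tagged = add_controls(records)
check_controls(tagged)  # would raise if a processing step had corrupted the controls
```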

Since there are 5 agents per group, I would expect the lowest score to be 0 + 1 + 2 + 3 + 4 = 10 units; the highest score should be 10 + 9 + 8 + 7 + 6 = 40 units. Accordingly, the distribution of events occurs between 10 and 40, as indicated on the diagnostics image above. Can the performance of agents be determined without knowing their individual contribution levels, using only workgroup production data? By and large, yes. As the next illustration shows, it is possible to determine relative performance by inference using workgroup participation rather than individual production amplitudes. I say that performance is relative since the actual contribution levels are not known; or at least they shouldn't be known. Agent 10 is ranked higher than agent 9; agent 9 higher than agent 8; agent 8 higher than agent 7; and so forth. Yes, I notice the distorted readings for agents 6, 3, and 0. Keep in mind that the relative differences in contribution were determined inferentially from random groupings. Just by chance, agents might find themselves in strong or weak groups that make them seem like better or worse performers than they actually are. A great performer might be matched up with terrible groups. A terrible performer might be assigned to great groups. Therefore, distortion can occur in the case of random groupings; perhaps these dynamics describe the situation even in real life.
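
One naive way to approximate this kind of inference, continuing from the simulation sketch above, is to rank each agent by the average output of the workgroups they participated in. This is not the methodology's actual algorithm, just a baseline showing how far membership and group totals alone can take us, distortions included.

```python
from collections import defaultdict
from statistics import mean

def rank_by_participation(records):
    """Rank agents by the mean total output of the workgroups they joined.

    No individual amplitudes are used: only group membership and group totals.
    Because groupings are random, the ranking is relative and can be distorted
    when an agent happens to land in unusually strong or weak groups.
    """
    totals_by_agent = defaultdict(list)
    for group, total in records:
        for agent in group:
            totals_by_agent[agent].append(total)
    scores = {agent: mean(totals) for agent, totals in totals_by_agent.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for agent, score in rank_by_participation(records):
    print(f"agent {agent:2d}: mean output of groups joined = {score:.1f}")
```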


It might be tempting to use an algebraic approach to determine contribution levels. Algebra delivers misleading clarity. In real life, when people are placed into groups, performance depends on the group. This is the whole idea of having groups. If the total were equal to the sum of the parts, there would hardly be a business need to have groups. In reality, a worker might perform really well in one group and poorly in another. More importantly, a group might perform better with particular workers and worse with others. The reality of performance created by and within a group might not exist once that group is dismantled and its resources redistributed. Nor in practice is every person in a group responsible for production-related activities. An organization taken as a whole cannot operate properly if it has only production personnel. The contribution of each member might not be apparent all the time or within every context. Nonetheless, from this simulation, where we coincidentally know the individual contribution levels in advance, it seems that the TIME methodology provides a reasonably good ranking.


Conceptual Framework


There is a saying that we shouldn't compare apples to oranges. I believe that it would be more accurate to say we shouldn't compare a quantity of apples to a quantity of oranges. The numerical relevance of one does not translate in the same manner to the other. Of course we can compare fruit. We just cannot say that 1 mango = 1 banana, or that a sum of mangos means the same thing as a sum of bananas, since there is no equivalency in amplitudes. Numbers only allow for numerical comparison. Bananas and mangos are still comparable, just not mathematically. In the simulation, I make use of units of production; this provides a comparable basis. The need for comparable amplitudes creates a barrier preventing the integration of mixed data. For example, a person might ask, how can market reports, absenteeism records, and production statistics be used together? The use of these data resources is instrumentally defined by the contexts in which they were produced. Once the data is disembodied or disassociated from its realms of discourse, it becomes necessary to recontextualize it in order to attach relevance. I hardly ever use amplitudes since doing so demands a paradigm of sameness that distorts meaning.


Although the theory surrounding the TIME methodology might seem complicated, the conceptual dynamics are straightforward. Accordingly, I will put aside theory for now and just explain some basic concepts. TIME is an environment intended to support the deployment of algorithms. These algorithms take advantage of the fact that all of the data accumulated using the methodology has been contextualized. The algorithms determine the relationship between the data and the contexts. I have two additional images to share from the simulation. The first image is for agent 0. Recall that this individual literally contributes nothing to production. The second image is for agent 10, who has the highest contribution level. I already described the Crosswave Differential (XD) in earlier blog posts, so I won't really elaborate on its construction here. However, put simply, the XD is an algorithm that separates the contribution of an agent from the ambient level that exists without the agent, in relation to established contexts. Agent 0 does not actually show up with a contribution level of 0; but whenever agent 0 is included in a group, the overall performance of the group declines. Of course, it would be reasonable to confirm agent 0's performance through conventional means, assuming he or she is a production worker. When agent 0 participates in groups, the groups produce about 23 units. When agent 0 is absent from groups, production increases to about 30 units. (I am reading directly from the image.) The dark arrow shows the crosswave for agent 0. The light arrow indicates the crosswave when agent 0 is not participating in groups.
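
In the spirit of the XD, though certainly not a faithful reconstruction of it, the sketch below (continuing from the simulation code) compares the output of groups that include a given agent against the ambient output of groups that do not. The direction of the difference matches the images; the exact figures in the post come from the XD distributions themselves, not from this simplified mean-based calculation.

```python
from statistics import mean

def presence_vs_ambient(records, agent):
    """A simplified stand-in for the Crosswave Differential: mean group output
    with the agent present versus the ambient level with the agent absent."""
    with_agent = [total for group, total in records if agent in group]
    without_agent = [total for group, total in records if agent not in group]
    return mean(with_agent), mean(without_agent)

# Agent 0 drags group output below the ambient level; agent 10 lifts it above.
for agent in (0, 10):
    present, ambient = presence_vs_ambient(records, agent)
    print(f"agent {agent:2d}: present ~{present:.1f} units, ambient ~{ambient:.1f} units")
```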



In the second image, the ambient crosswave pattern under the light arrow is closer to the centre, while the pattern for agent 10 under the dark arrow is closer to the right. When agent 10 is part of a group, production is at about 31 units. In the absence of agent 10, ambient performance slips to about 24 units. So this is a fairly simple, albeit rather instrumental, explanation of performance. I suggest that a superficial use of such illustrations is probably inappropriate for reviewing performance in real life. The context is just so estranged from the day-to-day realities of life in a workplace. Approach the use of algorithmic representation carefully, since the contexts defined by management for assessment purposes might not lead to long-term business gains. A person in a group might not actually be responsible for producing units of tangible output. For instance, a manager might not be involved in production in a literal sense. Nonetheless, if it seems that groups produce more when agent 0 is replaced by somebody else, then sadly, from the standpoint of the contextual relevance set by management itself, the stars seem hostile to this individual. As I will explain shortly, the TIME methodology is designed to promote an inductive management environment that takes into account any number of possible considerations. There is no need to take a simplistic approach.



Recursive Symbolic Aggregation


I use the term "symbolic aggregation" in other posts to describe how events can give rise to contexts. Dozens or perhaps even hundreds of events can lead to a context or contexts; and a context can then be used as an event to combine with other events, leading to more contexts. As such, there isn't a limit to the number of events giving rise to any context; and there is no limit to the number of contexts giving rise to even more events. I certainly hope this is easier to understand than it was to write. If there is a limit to how much data can be processed, it is not due to the methodology per se but rather the current state of an organization's processing technology. If there were a computer system powerful enough, all of the events that might occur on a transit system or highway could be assigned to a context such as X. An event-to-context relevancy algorithm such as the Crosswave Differential (a specific type of "algorelevancy") could then measure the extent to which data is relevant to its asserted context. Symbols can naturally "fall out of context" depending on the exact nature and effectiveness of the algorithm.
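
A minimal sketch of what I mean by recursion here: events are aggregated under a context symbol, and that context can then stand in as an event for a higher-level context. The class and the transit-flavoured names are mine, chosen only to echo the example in the paragraph above.

```python
class Context:
    """A context formed from events; it may itself serve as an event elsewhere."""

    def __init__(self, symbol, events):
        self.symbol = symbol        # the context's label, e.g. "X"
        self.events = list(events)  # raw events or lower-level contexts

    def flatten(self):
        """Recursively unpack nested contexts down to the raw event symbols."""
        out = []
        for event in self.events:
            out.extend(event.flatten() if isinstance(event, Context) else [event])
        return out

# Low-level events aggregate into contexts...
morning_rush = Context("morning_rush", ["delay_at_stop_4", "crowding_line_2"])
evening_rush = Context("evening_rush", ["signal_fault", "crowding_line_2"])
# ...and those contexts become events for a broader context such as X.
transit_day = Context("X", [morning_rush, evening_rush, "weather_alert"])
print(transit_day.flatten())
```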


The simulation compares apples to apples, as one might expect in a comparison of performance between similar types of employees. But employees are not necessarily all that similar. For instance, there might be skilled trades-people along with administrative staff. The fact that we try to reach unified approaches from diverse human and capital resources might lead us to question why performance should be limited to people: why not be really eclectic and mix dissimilar productive elements? Through the TIME methodology, it is possible to incorporate all sorts of data into the analysis. Not only can different types of work be taken into account, but also workplace conditions and settings. I might for instance combine employees, operating systems, hours of work, office lighting, and ventilation levels. A truly large amount of exceptionally diverse data can be included. The methodology involves relatively little overhead if one chooses to be open to the possibilities. Since the possibilities are limitless, I suppose that even a little overhead can overtax computer resources eventually. I recall wondering whether a head of cabbage can perform better than a CEO whose company is responsible for widespread environmental destruction. I'm not naming names, British Petroleum. I honestly don't know the guy's name, come to think of it. Using the TIME methodology, we can actually determine whether a chair or clock contributes more to performance than a CEO, depending of course on the construction of relevancy.
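
Schematically, mixing dissimilar elements looks no different from the agent simulation: the members of each recorded context simply stop being limited to people. The records and names below are invented purely for illustration; in practice the data would come from contextualized organizational sources.

```python
from statistics import mean

# Hypothetical contextualized records: each set of members mixes employees
# with conditions such as lighting and the operating system in use.
mixed_records = [
    (frozenset({"katie", "raj", "led_lighting", "linux"}), 37),
    (frozenset({"katie", "omar", "fluorescent", "windows"}), 29),
    (frozenset({"raj", "omar", "led_lighting", "windows"}), 33),
    (frozenset({"katie", "raj", "fluorescent", "linux"}), 31),
]

def presence_vs_ambient_mixed(records, element):
    """The same presence/absence comparison, applied to any element of a context."""
    with_element = [total for members, total in records if element in members]
    without_element = [total for members, total in records if element not in members]
    return (mean(with_element) if with_element else None,
            mean(without_element) if without_element else None)

print(presence_vs_ambient_mixed(mixed_records, "led_lighting"))
print(presence_vs_ambient_mixed(mixed_records, "katie"))
```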


If an employee named Katie contributes strongly when she is not using a particular computer, this is useful information from a management standpoint. Performance evaluation can therefore be used to improve the workplace rather than simply labeling some workers as superior and others as unsatisfactory. It is possible to examine specific performance settings that contribute to particular types of sales. Some workers might do really well selling certain products from home while others might function better in a traditional office environment. The transpositional strategy becomes important as one reaches out to include different types of data. For instance, if monthly sales reports are available for a particular industry, it takes some thought to incorporate data from these reports into the broader analysis. The problem isn't the availability of data but rather its transpositional application to different organizational contexts. Nor is this an issue of prediction, but rather it poses a challenge of phenomenal description. A report is nothing if not its contextual relevance.
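
For the Katie example specifically, a conditional version of the same comparison would be needed: restrict attention to contexts where Katie is present and split them by whether the particular computer is also present. Again, this is only a sketch under my own assumptions about the data layout, with a hypothetical element name for the computer.

```python
from statistics import mean

def conditional_comparison(records, element, condition):
    """Mean output of contexts containing `element`, split by whether
    `condition` (e.g. a specific computer) is also present."""
    with_condition = [total for members, total in records
                      if element in members and condition in members]
    without_condition = [total for members, total in records
                         if element in members and condition not in members]
    return (mean(with_condition) if with_condition else None,
            mean(without_condition) if without_condition else None)

# e.g. conditional_comparison(mixed_records, "katie", "workstation_7")
```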


Environment for Algorelevancy

TIME is a type of data-rich environment. The TIME methodology represents a way of forming this environment so that the data resources can be accessed in particular ways. Algorelevancies are used to measure the "distance" between elements or objects of data within the environment, albeit not in a spatial sense. Algorelevance is a transpositional concern. Some readers might want to read my blog on the Geography of Data, which I also describe as Transpositional Geography. I currently have one algorelevancy that I use routinely and another that is under development. The relevancy of data to performance can be characterized either as stress or shock, which are terms describing the nature rather than the direction of relevance. Neither stress nor shock is meant to imply something bad. Instead, stress is something that tends to build up and be persistent. For example, Kurt Lewin's Force Field Analysis to me describes a stress situation where the interaction of opposing forces leads to equilibrium conditions along a conceptual edge. Shock involves a sudden or acute reaction. I have been mostly focused on the stress response, which is probably the more prevalent in business settings. Shock might be more relevant in terms of the response of financial markets to shifting global conditions; acts of terrorism and rioting; fear and panic over fuel and food shortages; and maximizing the impact of military campaigns. Conditions of shock occur infrequently and pass rapidly, making the data more difficult to examine than in the case of stress.


My father was a maintenance mechanic. I have a fancy citation issued to him by a former employer that says, "For devotion to duty during typhoon 'Welming'." My father, very young at the time, stayed to secure the building rather than let the facility fall during the typhoon. I suppose he helped to save the plant and protect the livelihoods of many people. Such an act of heroism has tangible capital value that is difficult to measure over the course of day-to-day operations. The context of performance might not reveal the contribution of members in every potential setting, especially under conditions of shock. Certainly only the most superficial management regimes would choose to perceive performance in purely instrumental terms and in a nominal sense. I consider the XD poorly suited for situations of shock. But I firmly believe that sometimes companies sink or swim based on the uncommon valour of their employees. When organizations have similar processes, systems, and workers, even the smallest differences under the rarest conditions might affect the order of things. So shock analysis might be important strategically, and I hope to contribute a bit to this discussion in the future.


The conceptual framework described in this blog is all about the stress response, making use of the Crosswave Differential algorelevancy. If anybody is interested in shock, my tentative name for the shock equivalent is the Shockwave Differential algorelevancy. When I use the term "algorelevancy," I am usually referring to the transpositional approach. When I use the term "differential," I mean the actual algorithmic measurement. However, when I use "Crosswave Differential," I might mean either the approach or the measurement. I therefore invented the term algorelevancy to prevent confusion and to be more specific. I know this is all probably just a blur except to those passionate about, or perhaps obsessed with, partitioning knowledge. My main objective is just to distinguish between the data environment and the algorithms used to make sense of the data. I would cringe if I heard a comment like, "The TIME methodology just doesn't work." The methodology only creates the setting for the algorithms. I firmly believe in the value of contextualized data. It is perhaps the best way to go if a person hopes to make effective use of extremely large amounts of data. The TIME methodology enables contextualization. But the assertion of transpositional relevance is a separate concern.


Transpositional Inductive Management Environment (TIME)


The TIME methodology is a way of organizing data. But before this organization takes place, there must be data to organize. Before data collection occurs, there has to be an understanding of what data to gather. A reference library is full of data; but a person still has to know what data should be gathered. Without this knowledge, the significance of the data might not be apparent even after a person finds it. Over the course of this search, it is possible to accumulate important events leading to different types of decisions. In the context of these decisions, the relevancy of the data might be affected. The determination of relevancy (and therefore also the construction of algorelevancies) is a management concern. Managers might question the relevance of decisions in relation to profits. These days, with so much data at hand, it can be tempting to ignore how the setting of data can influence decision-making. Data that seems promising at first could later prove pointless. Data that seems irrelevant at the outset might later be found instrumental. Therefore, in attempting to manage data resources directly, there is a great likelihood of exhaustive effort accompanied by a high risk of squandered resources. The TIME methodology reduces the burden on managers by allowing them to concentrate on the contexts in which data might be relevant.


"Transpositional" relates to the management of contexts in which events occur. The impact of data on performance or any other context is determined through relevancy design (the transpositional construction of relevance) and how data should be placed within this structure. I have found it difficult to describe the algorithmic process; and perhaps the code is rather complicated. But essentially, it is necessary to determine the placement and properties of elements of relevance through contextual construction. Although I don't want to complicate matters, algorelevancy describes not just an algorithm but an algorithmic process. Obviously this function is best performed by a computer. However, contextual construction is performed by managers. Algorelevancies must operate within the parameters of contextual constructs made possible using the TIME methodology. Although to me the idea of contextual relevance is fairly tangible, it might be a foreign concept to some readers; this could be because data is sometimes gathered in a state that lacks any kind of contextual connection. Using the TIME methodology, data never exists without a context. Data is never disembodied. Moreover, any piece of data can be associated with an unlimited number of contexts. I therefore distinguish between the data itself and the organizational contexts that support assertions of relevance for any data collected. The methodology allows for active managerial involvement without necessarily requiring their intervention in daily processes. The methodology can accommodate massive amounts of data, thereby bringing the scope of management more in line with what the new technology has to offer.
