During my childhood, our school librarian told me that I was invited to attend a conference of writers. I felt honoured and privileged. I asked what the writers intended to ask me. She smiled and said that actually I would be asking the writers questions. Not quite sure why I would ask these people anything or why their thoughts would matter, I nodded anyway and in due course attended the most boring event imaginable for a young child. I thought I had died, I really did. I sat there listening – not allowed to move – with no real interest. It horrifies me to this day, although I admit that my recollection of childhood is not completely reliable. Suffice it to say, I have never been faster than a speeding bullet or able to leap tall buildings. I am no superman. Ironically, some people might suggest that I provide the data by which others are judged – data meant to help the organization locate and identify the superman. In this blog I will discuss the concept of selection in the face of randomness. Purely by chance, a person like me could have turned out completely differently. I know that selection processes and preconceptions of performance have worked both for and against me. It is important to distinguish between the quasi-intellectualism of the business world and the meaningful insights that can be obtained from data.
I wrote a simulation – I can program in several computer languages, perhaps unlike those writers from my childhood who haunt me to this day – in which a client randomly chooses a sales agent to make a sale. In the simulation, such choices occur many times a day. However, the overall number of sales is modulated by a saw-tooth wave. Notice the zigzag pattern above. Since there are 1000 cycles, or sales days, the resulting image is fairly noisy. I filtered the data using the 10-DMA (the 10-day moving average) to obtain the cleaner image shown below. Hopefully the saw-tooth is apparent at this point.
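My original program is not shown here, but a minimal Python sketch of the setup as described might look as follows. The agent count, saw-tooth period, and daily volumes are hypothetical stand-ins; only the overall mechanism – random client choice, saw-tooth modulation, 10-DMA smoothing – comes from the text.

```python
import random

NUM_AGENTS = 5      # hypothetical number of agents
NUM_DAYS = 1000     # sales days (cycles), as in the post
SAW_PERIOD = 50     # hypothetical saw-tooth period in days

def daily_total(day):
    """Overall sales for a day, modulated by a saw-tooth (ramp up, drop, repeat)."""
    base, amplitude = 20, 10  # hypothetical volumes
    return base + amplitude * (day % SAW_PERIOD) // SAW_PERIOD

def simulate(seed=0):
    """For each sale, a client picks one of the agents uniformly at random."""
    rng = random.Random(seed)
    sales = [[0] * NUM_DAYS for _ in range(NUM_AGENTS)]
    for day in range(NUM_DAYS):
        for _ in range(daily_total(day)):
            sales[rng.randrange(NUM_AGENTS)][day] += 1
    return sales

def moving_average(series, window=10):
    """10-DMA filter used to smooth the noisy daily figures."""
    return [sum(series[max(0, i - window + 1):i + 1]) / min(i + 1, window)
            for i in range(len(series))]
```

Plotting `moving_average(sales[i])` for each agent `i` would reproduce the kind of smoothed saw-tooth curves discussed here.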
I present all of the cycles merely to show that the sales are being manipulated or controlled – in this case by me. Predestiny is at play. Since 1000 sales days is a long time, I present just 20 days below. This shorter window more closely approximates what might be found in any work environment containing a number of agents. Under predestiny influenced by randomness, in all likelihood all of the agents would eventually perform about the same: apparent distortions in performance are happenstance – temporary aberrations destined to be corrected. Yet an examination of this third image alone would not suggest predestiny: some agents seem to be much better performers than others. In order to give rise to the superman, quasi-intellectualism demands that an organization place its bets on the best performers. So there would be an elaborate process of selection to separate superior from inferior employees. Agent #3 in cyan seems to be the worst performer of the bunch. Agent #2 in yellow consistently outperforms agent #3.
However, a longer-term perspective shows the following: if the cumulative sales totals are monitored for each agent, the patterns generally coincide quite closely – as shown below. There is little difference in overall performance. If an agent were replaced, there wouldn’t be a great difference in performance except, of course, during the training period. There would certainly be a higher administrative cost to shed and later replace staff. I am all but certain that if two similar companies followed two different management practices, the first would tend to outperform the second: 1) the first company keeps its production staff in place and focuses its attention on business problems; 2) the second company constantly changes and retrains its staff, directing its efforts at selection rather than engaging its business challenges.
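The convergence of cumulative totals can be checked directly. This sketch (with hypothetical parameters – five agents, 1000 days, 20 sales per day, uniform random client choice) computes the running totals and their relative spread:

```python
import random
from itertools import accumulate

def cumulative_totals(num_agents=5, num_days=1000, sales_per_day=20, seed=1):
    """Cumulative sales per agent when each sale goes to a random agent."""
    rng = random.Random(seed)
    daily = [[0] * num_days for _ in range(num_agents)]
    for day in range(num_days):
        for _ in range(sales_per_day):
            daily[rng.randrange(num_agents)][day] += 1
    return [list(accumulate(series)) for series in daily]

totals = cumulative_totals()
finals = [t[-1] for t in totals]
mean = sum(finals) / len(finals)
spread = (max(finals) - min(finals)) / mean
print(f"final totals: {finals}")
print(f"relative spread: {spread:.1%}")  # small: the curves nearly coincide
```

With purely random assignment, the gap between the best and worst final totals is a few percent of the mean – consistent with the claim that the long-run patterns closely coincide.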
As further confirmation that performance is about even, I provide the pie chart below showing the distribution of cumulative sales after 1000 cycles. Presumably, after more sales days, the distribution would become even more uniform. I rarely get to use pie charts these days.
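The numbers behind such a pie chart are simply each agent's share of the cumulative total. A quick sketch (again with hypothetical counts – five agents, 20,000 total sales):

```python
import random

def sales_shares(num_agents=5, total_sales=20_000, seed=2):
    """Each agent's share of cumulative sales under uniform random client
    choice – the figures a pie chart of the final totals would display."""
    rng = random.Random(seed)
    counts = [0] * num_agents
    for _ in range(total_sales):
        counts[rng.randrange(num_agents)] += 1
    return [c / total_sales for c in counts]

shares = sales_shares()
print(["%.1f%%" % (100 * s) for s in shares])  # each slice close to 1/5 = 20%
```

As the number of sales days grows, each slice converges on 1/N of the pie, which is what "even more uniform" amounts to here.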
Controlled distribution (or random distribution as a type of control) matters when . . . yes, when distribution is controlled. I call this the queue model – although there might not be a physical queue. Randomness provides evenness. But the absence of evenness in a non-random distribution does not negate the underlying dynamics: there is a limit on the extent to which performance can be attributed to the specific worker. People waiting for a bank teller or cashier are unlikely to seek out a specific individual. It would therefore be illogical to evaluate performance in these situations based on the number of clients served. Occupations where the method of service delivery resembles a queue – in that the worker has little influence over the workflow and how the client is handled – include the following: driving a bus; processing agents with scheduled deliverables; building cleaners with preset duties and times; most assembly-line workers; any job that can at some point, at least conceptually, be programmed into a machine.
To the extent that the client can make a choice afterwards – e.g. whether or not to buy a product based purely on the abilities of the agent – the agent might be held responsible. This “sounds” fine, but in practice such choices seem unlikely to be connected solely to the abilities of agents. Indeed, it seems to me that a number of organizations go out of their way to make agents “interchangeable”, like parts in a machine, to allow for quick replacement. Interchangeability is an important aspect of the queue model, allowing customers to exhibit indifference. Without indifference, the whole concept of a queue would be undermined. I therefore do not select a bus to ride based on the person driving it. I do not select a particular cashier if there are many waiting to take payment. Below, I list some general ideas to take away from the simulation.
In the event of a single agent: if there is only one agent, his or her numbers reflect individual capacity rather than demand. It would therefore be appropriate to discuss individual capacity – but not the influence of that agent over demand. If there is only 1 cashier servicing 30 customers in a line-up, demand for services exceeds personal capacity; to infer from that cashier’s flat numbers that demand has declined would be illogical.
If there are just a few agents: if there are some but not enough agents to handle the entire load in good time, their numbers generally reflect the capacity of the organization. Therefore using their combined numbers to assess demand for a product will likely provide poor or uneven guidance.
If there are many agents: in this case, the aggregate numbers will tend to reflect demand rather than capacity since there is overcapacity. This situation to me seems most likely to result in good guidance about demand. At the same time, I appreciate that overcapacity tends to be considered wasteful.
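The three cases above reduce to one identity: the metrics record the smaller of demand and capacity. A toy sketch makes the point – the numbers are purely illustrative, not taken from the simulation:

```python
def observed_sales(demand, capacity):
    """What the metrics actually record: the minimum of demand and capacity."""
    return min(demand, capacity)

demands = [25, 30, 40, 35]   # hypothetical true daily demand
under_capacity = 28          # one or a few agents: not enough to clear the queue
over_capacity = 60           # many agents: overcapacity

few = [observed_sales(d, under_capacity) for d in demands]
many = [observed_sales(d, over_capacity) for d in demands]
print(few)   # [25, 28, 28, 28] – mostly flat: the numbers reflect capacity
print(many)  # [25, 30, 40, 35] – the numbers reflect demand
```

Under-capacity clips the signal at a ceiling, so the recorded figures go flat no matter what demand does; only with overcapacity do the figures trace demand itself.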
If repetitive replacement doesn’t lead to tangible improvement: if a person occupying a particular role or position has been replaced often without any change in overall performance, that is a good sign the problem should not be blamed on the agent. The individual metrics used to ascertain performance might not be helpful.
Not so much from the simulation but more from a general human-resources standpoint, I suspect that superman seeks out super-pay. If the compensation system is not designed to recognize differences in performance, it would be illogical to expect those differences to emerge nonetheless – as if superman were super-dumb. Super-pay goes against the idea of replaceable staff serving a queue environment. If anything, the queue or conveyor belt is designed to ensure the lowest possible pay precisely by limiting the nature of what an employee can contribute – thereby limiting the need for compensation.
We therefore need to consider how changes in capacity affect the relevance of assertions about demand. When an analyst compiles sales data and attempts to determine the trend, that trend might reflect not product demand per se but rather limitations in capacity. When some effort is made to improve sales performance by turning over staff, it should be noted that the metrics are unlikely to reflect performance per se but rather the capacity to accommodate the flow – almost like a worker at a conveyor belt. It is possible to monitor a worker’s ability to accommodate a flow, which of course is worthwhile information; just keep in mind that if this person leaves, he or she might exhibit an extraordinary ability to perform at a company that maintains overcapacity. “Performance” is not really at issue in an environment of under-capacity, and the metrics should not be misrepresented as such.
The selection of superman is largely irrelevant in work environments that place a premium on replaceable staff. Organizations are instead seeking “handling capacity.” The conceptual difference between performance and handling relates to the extent to which a worker is able to exercise control or personal discretion over choices. If there is a high level of regulation limiting personal choice, the worker is responsible for compliance much more than performance. As part of this compliance regime, it is necessary to take whatever clients emerge from the queue and to handle their needs in the prescribed manner – almost like a machine. Superman need not exist in such an organization – for he would have little opportunity to make use of his exceptional abilities. Still, there remains this nagging question of how to optimize organizational performance given the high likelihood of under-capacity in many organizations. I believe that most companies already know the answer: invest in technology. I add the following: understand the technology; use the technology cautiously; and make a place for superman in case he or she happens to be in disguise. Focus on making places, not on making replacements.