
Applied Ontology and the Drivers of Data Recognition

I shared my story in a few blogs about returning to university to do a graduate degree.  In my first class, I found myself being asked to define “ontology.”  It was a course on the Geography of Disability.  I returned to class the following week with some details.  I said that strangely enough, this is not a word that can be found in all of my dictionaries.  One dictionary listed “oncology,” which I believe is the study of cancerous tumours.  My Collins Cobuild dictionary says, “Ontology is the branch of philosophy that deals with the nature of existence.”  Since I felt I would be asked to elaborate, I secretly visited Wikipedia for some background information.  Months later, a lecturer at a social science course - for which I was one of several teaching assistants - said that ontology is also the study of how things gain relevance.  Things come to exist by being recognized or becoming relevant.  My “feel” for the meaning came only after I started in my present job: others asked to discuss the criteria I used in the metrics I collected.  The criteria became formal and externalized - defining all sorts of situations and behaviours relating to production.  In effect, rules were created governing the creation of data.

 

In terms of ontology, I remain at the “command post.”  Others stand ready to make requests to refine or redefine the criteria.  The reason for this is practical rather than a matter of authority.  I constantly develop metrics.  I need clear and specific criteria to get anything done.  I do whatever seems most reasonable at the time.  I am always responsive to potential changes later.  I am sensitive to the fact that the underlying criteria used to generate data have a number of strong drivers or influences, some of which I will be discussing in this blog: 1) production requirements; 2) my lived experience and sense of fairness as an employee; 3) my sense of social justice and ethics; 4) social construction and business practices; and 5) the needs of the data.  It helps for a person like me to have objectives that are aligned with management, at least from a business standpoint, since there are many aspects of self-policing operating behind the scenes.  The biggest way to impair what I do is to question whether that self-policing is leading to the desired outcomes.  In many respects, people are paying for the self-policing.  If people want to externally scrutinize, evaluate, and make determinations at every step, they are actually saying that they want to do the job themselves; this results in duplication, which, apart from making poor use of resources, leads to great delays.

 

A few months after being hired for my job, I was asked to start reviewing the work that people do, mostly for errors.  I was told that I would eventually be asked to provide some guidance on how to evaluate the performance of my coworkers.  My supervisor admitted that she didn’t have a good approach, and she openly stated that she wasn't strong with numbers.  Although a graduate certificate in human resources management gave me some understanding of how to score employees, it seemed to me that any kind of development required a concerted use of electronic data.  The paper records used by my predecessor hardly seemed appropriate.  I redesigned my processes and made them paperless, with the objective of generating data as a “waste product.”  (I didn’t need the data at that instant, but I felt I would need it later.)  I started keeping data for the entire operation and also for the individual agents.  Rather than looking at just the errors in their work, I began recording the behaviours that seemed likely to give rise to errors.  Production requirements have a great influence on data recognition.
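To make the “waste product” idea concrete, here is a minimal sketch in Java of what such a paperless review process might look like.  All class, field, and method names are hypothetical illustrations, not the actual system: each review generates a structured row as a by-product of the checking work itself, and agent-level statistics can be derived from the accumulated rows later.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each completed review is logged as a row, so
// performance data accumulates as a "waste product" of the checking work.
public class ReviewLog {
    // One row per reviewed item: the agent, the item, any behaviour
    // flagged as likely to give rise to errors, and whether an error occurred.
    public record Review(String agent, String item, String behaviourFlag, boolean error) {}

    private final List<Review> rows = new ArrayList<>();

    public void record(String agent, String item, String behaviourFlag, boolean error) {
        rows.add(new Review(agent, item, behaviourFlag, error));
    }

    // Error rate for one agent, derived from the accumulated rows -
    // data that wasn't needed at the instant of review, but is available later.
    public double errorRate(String agent) {
        long total = rows.stream().filter(r -> r.agent().equals(agent)).count();
        if (total == 0) return 0.0;
        long errors = rows.stream()
                .filter(r -> r.agent().equals(agent) && r.error())
                .count();
        return (double) errors / total;
    }
}
```

The design choice worth noting is that nothing extra is asked of the reviewer: the data exists because the review was recorded electronically rather than on paper.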

 

I admit that I took my job not as a career move but mostly to pay off my student debts.  So for the first few months, I mostly did the work that I would eventually be checking.  The “details of the work” are a bit confidential.  My important point, however, is that I cannot help but judge the criteria I create through the lens of my lived experience.  It is unfortunate for everyone that I usually judged myself by a high standard.  On a positive note, I understood the meaning of a reasonable and sustainable workload.  When I had to determine what behaviours seemed likely to lead to problems, I drew from my personal experience.  In fact, rather than return “codes” used in their scores, I returned “comments” associated with the codes.  I reasoned that I would eventually have a code for each potential problem imaginable.
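The code-versus-comment distinction above could be sketched as a simple lookup: the reviewer records a terse code, and the worker receives the human-readable comment behind it.  The codes and wording below are entirely hypothetical examples, assuming a small dictionary that grows as new problems are encountered.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: scoring codes map to readable comments, so
// feedback arrives as an explanation rather than a bare code.
public class FeedbackCodes {
    private final Map<String, String> comments = new LinkedHashMap<>();

    public FeedbackCodes() {
        // Illustrative entries only - each newly encountered problem
        // would get its own code, so the dictionary grows over time.
        comments.put("C01", "Required field left blank on intake form.");
        comments.put("C02", "Follow-up not scheduled within the standard window.");
        comments.put("C03", "Supporting document attached to the wrong file.");
    }

    // Unknown codes fall back to a generic note until a code is defined.
    public String commentFor(String code) {
        return comments.getOrDefault(code, "Unclassified issue - see reviewer.");
    }
}
```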

 

I think that many people don’t realize how important social justice and ethics are in the development of data criteria.  In fact for me, the need for justice drives criteria development.  If one employee who shows up a few hours late each day seems to have better metrics than another person who always shows up on time, the organization might be encouraging people to sleep a few hours longer for the sake of improved metrics.  It is not my place to encourage workers to go to work on time.  But I can make their absence more apparent using the systems available.  In fact, I keep track of attendance.  They don’t have to punch a timecard.  I don’t have to leave my desk to check theirs.  I use the data system to provide a profile of their attendance.  Not only this, but some workers at work might not actually be working.  I developed several different ways of adding context to the phenomenon of presenteeism - using the data system.  The fact is, if one person is surfing the net all day while another employee is working hard, I cannot let that dedicated worker suffer from a poorly designed metrics regime.  It is just plain wrong.
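The attendance profiling described above might be sketched as follows, using the first recorded activity of each day as a stand-in for arrival time.  This is an assumption for illustration only (the class name, the proxy, and the official start time are all hypothetical); the point is that the data system, not a punch clock or a walk past someone's desk, supplies the context.

```java
import java.time.LocalTime;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: build an attendance profile from the data system
// by treating each day's first recorded activity as the arrival time.
public class AttendanceProfile {
    private final List<LocalTime> firstActivity = new ArrayList<>();
    private final LocalTime startOfDay;

    public AttendanceProfile(LocalTime startOfDay) {
        this.startOfDay = startOfDay;
    }

    public void recordDay(LocalTime firstEvent) {
        firstActivity.add(firstEvent);
    }

    // Fraction of observed days where activity began after the official
    // start - context for interpreting a worker's production metrics.
    public double lateFraction() {
        if (firstActivity.isEmpty()) return 0.0;
        long late = firstActivity.stream().filter(t -> t.isAfter(startOfDay)).count();
        return (double) late / firstActivity.size();
    }
}
```

A similar profile over gaps between recorded activities could add context to presenteeism, though what counts as a meaningful gap would be a judgment call.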

 

Management science has a history of being big on micro-management - that is to say, managing every small detail of the work that a person does.  It “seems” scientific.  There are all sorts of preconceptions that define the meaning of productivity.  It is important not to allow the “metrics of conformance” to override the “metrics of performance.”  Micro-management tends to be about ensuring behavioural conformity.  “Am I an instrument of propagating a popular preconception?” I routinely ask myself.  It is a question that becomes particularly important if competition emerges and tests whether the policies and practices of the past genuinely matter.  The metrics regime can cause workers to operate within certain limited parameters or boundaries that might interfere with market penetration.  Micro-management should not lead to strategic mismanagement or misalignment.  So I consider myself a frontline agent ensuring that the metrics don’t needlessly encroach on personal autonomy and control.  The people above me might not even realize the amount of care and thought that goes into the construction of data.

 

The final concern I want to discuss relates to the needs of the data.  After all is said and done, I need to move from a point of data absence to data presence.  This means making use of what I have available.  “Making use” means using the tools at my disposal in the manner they were meant to be used - which for me means working with the limitations at hand.  These limitations to some extent determine what data is recognized and how.  For instance, a relational database imposes a certain mindset, which I have described in other blogs as “pigeon-holing.”  The pigeon-holing creates inertia that impedes the free and natural expression of phenomena.  I am more likely to program an algorithm in Java than in R because reality doesn’t necessarily manifest itself as vectors.  The extent to which a person can adapt their tools to bring about the recognition of phenomena as data will, on a day-to-day level, influence the depth and relevance of analysis.
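The “pigeon-holing” point above can be illustrated in miniature.  The contrast below is entirely hypothetical: a fixed relational row decides its columns in advance, while a looser structure lets an unanticipated attribute be recognized as data the moment it presents itself.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of "pigeon-holing": fixed columns versus
// attributes that attach as the phenomenon presents itself.
public class Observation {
    // The relational mindset: a fixed set of columns chosen in advance.
    // Anything that doesn't fit these columns goes unrecognized.
    public record FixedRow(String agent, String item, boolean error) {}

    // A freer structure: an attribute not imagined at design time
    // can still be recorded, and so can come to exist as data.
    private final Map<String, Object> attributes = new LinkedHashMap<>();

    public void note(String attribute, Object value) {
        attributes.put(attribute, value);
    }

    public Object get(String attribute) {
        return attributes.get(attribute);
    }
}
```

Neither form is free of limitations, of course; the looser structure trades the query discipline of the relational model for openness to the unanticipated.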

I realize that my points seem far detached from the "study of existence" I mentioned earlier.  I studied the management records of one organization that operated for decades.  I want to assure readers that after 30 years or so, the people in those records exist mostly as numbers.  The reports might persist as anthropological artifacts or relics, but the day-to-day interaction with clients will usually exist in the numerical quotidian.  Similarly, the activities occurring in organizations today might resurface on the dusty backup drives of tomorrow as relational tables and unstructured data files.  We might leave behind not just information about clients but also how we have chosen to deal with them; how employees are aligned and coordinated to enable the objectives of the organization; the beliefs and practices giving rise to our ontological constructs and frameworks.  I would like to think that someday, a researcher will say that I gave the matter some careful consideration, that I cared enough to raise the issues - in this setting notorious for alienating clients and workers.


Tags: alienation, analysis, analytics, characterization, effectiveness, efficacy, efficiency, flagging, indicators, justice, ontologist, ontology, performance, philosophy, representation, social, theories, working


© 2019   Data Science Central ®
