Systemic Intelligence - Prelude to a Universal Data Model

Many years ago, I attended a vocational college to learn a skilled trade. I was taught about the behaviour of systems. I learned that after renovations to a house, the furnace might cycle on and off more frequently; this can leave some parts of the house too cold. A wood-burning stove or fireplace should be treated as part of a system. Open doors and windows in the dwelling can cause exhaust from such appliances to enter living spaces. I realize that these particular examples of systems might be lost on those without an HVAC background. Well, long before I went to college for retraining, I was working in the back office of a financial company. In those days, printers often used ink-ribbon cartridges. In one workplace, there was a particularly large printer shared by several departments; this device required frequent cartridge replacement. One day, somebody decided to save money by purchasing recycled cartridges. These cheaper substitutes often caused the printer to freeze up. Consequently, this minor cost cut that seemed reasonable on the surface contributed to extensive delays. Many departments were dependent on the printer for reports that had to be generated each day. I want to crystallize the nature of the problem: 1) on one hand, there was a financial benefit from reduced expenditure on cartridges; and 2) on the other hand, there were systemic losses greatly exceeding those benefits. In the Push Principle that I will be discussing shortly, I introduce an approach involving key data events to influence an entire system rather than its individual parts. The implications of such an approach are far-reaching: it highlights the importance of creatively and strategically handling data in order to target critical areas of concern affecting the entire body. This is because specific actions can translate into impacts that ripple across physiological, social, and production ecologies.

Why would anybody want to deal with problems in relation to broader systems? The main reason is to achieve the greatest amount of benefit to the entire system using what scarce resources are available. Attempting to deal with a single problem or a small aspect of a large picture can have undesirable consequences for other parts of the system. It is financially and logistically justifiable to examine ways of determining the systemic impacts of both problems and possible remedial actions. Apart from this general rationale, I point out the usefulness of systems in relation to software development. To some extent, systems contain their own internal logic or intelligence. I could, for instance, attempt to understand factory operations by studying all of the existing behaviours occurring within a facility in isolation. This tells me what people are doing. It doesn't show what they are supposed to be doing or might do differently; so I cannot ascertain the difference between normative and organizational behaviours. When people and capital are involved in processes in certain ways to achieve particular ends, this "organization" exists to achieve productive outcomes: this is an issue of functionality instigated by the objective. But "how" people go about achieving productive outcomes is socially constructed: this is an issue of instrumentality initiated by the proponents. I am suggesting that a systemic approach has the potential to trigger intelligence that neither I nor anybody else can anticipate in advance; therefore we cannot fully incorporate such intelligent design into processes of production. The attempt to initiate or dictate development is inherently non-systemic; the intelligence is external to the system and therefore quite difficult to package in a way that is adequate to the system. This blog post is about moving towards internally-guided rather than externally-driven intelligence.

Data Events and the Slider Object

I need to review some older blog material before proceeding to discuss how I personally would approach the issue of internal intelligence. A few months ago, I introduced a special object called a slider, as shown in the image below. A slider represents a conceptual distribution of events over a dividing line called an edge. Expressed simply, events appearing on the left of the slider seem negatively associated with particular phenomena; events on the right seem positively associated; and then there are the events surrounding the edge that seem prone to philosophical speculation. A slider helps the user by implementing "descriptors": these are details that accompany the data and explain its nature, which can then be used for sorting purposes and to create distribution profiles. In the image to follow, I use the descriptor "exercise" to sort events. Many exercise events appearing to the right of the slider might share certain features; these commonalities can point to new directions to further one's knowledge of the phenomena. I have a research prototype called Tendril that can generate sliders and the other data objects mentioned in this blog.
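For readers who think in code, here is a minimal sketch of the slider idea, written under my own simplifying assumptions rather than as a description of Tendril's internals; the event names, descriptors, and scores are placeholders. Events carry descriptors, and the slider arranges the events matching a chosen descriptor along a signed score relative to the edge at zero.

```python
# Minimal sketch of a slider-like structure (hypothetical names, not the
# actual Tendril implementation). Each event carries descriptors, and the
# slider orders matching events across a central "edge" at zero.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    descriptors: set = field(default_factory=set)
    score: float = 0.0          # negative = left of edge, positive = right

def slide(events, descriptor):
    """Return the events carrying the descriptor, ordered across the edge."""
    selected = [e for e in events if descriptor in e.descriptors]
    return sorted(selected, key=lambda e: e.score)

events = [
    Event("popping", {"exercise"}, 0.7),
    Event("stretching", {"exercise"}, 0.2),
    Event("nightmare", {"sleep"}, -0.5),
]
for e in slide(events, "exercise"):
    side = "left of edge" if e.score < 0 else "right of edge"
    print(f"{e.name}: {e.score:+.1f} ({side})")
```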

I feel that a slider provides a worthwhile perspective if one hopes to determine how to meaningfully expand the body of events. Sliding offers guidance if a person lacks knowledge of the underlying phenomena. Making use of such an object is actually a rather philosophical stance in relation to data. "I don't know very much about this at all. It's not that I need a starting point. I don't even know what a starting point is in relation to this data." An alternate philosophical stance asserts an understanding of phenomena a priori. Although information is absent, thus justifying its further collection, ironically we might already have a perception of how things generally fit together. This perspective might be valid if, despite our incomplete knowledge of the dynamics, we have a fairly coherent understanding of ourselves in relation to our surroundings. Furthering this perspective, I will now discuss something that I call a "risk object." In the case of risk, arguably one already knows what risk is. A person basically hopes to ascertain where an event falls within his or her perception of risk.

Popping and the Risk Object

Popping is a style of dance. It's not a brand of breakfast cereal or an ingredient. In popping, the body follows a series of smooth motions and abrupt shifts, juxtaposed perhaps to imitate a robot snapping and whirling around swivel points. I personally call this a type of exercise. I'm no expert in kinesiology. However, I would say that popping is probably good for the circulation; it is perhaps a bit stressful on the joints. For me in terms of data collection, popping represents a type of recurring event that affects many different contexts. If I extract the slider details for an event such as popping over different contexts, I become aware of its systemic impacts. I do not necessarily know the impacts of popping in advance. I do not wish merely to satisfy my preconception. But I do know what an "impact" is in relation to me; and I want to know if popping has an impact. (If I do not know what an impact is, then I cannot say if popping has one.) The question is no longer whether popping is "good" or "bad." Popping is both good and bad depending on the context, intensity, duration, and other details. Dealing with the event on a systemic level is a matter of risk management involving complex choices: there is 1) an absence of complete knowledge; and 2) some perception of contextual benefits and drawbacks. It is necessary to navigate or negotiate a path in the face of these conflicting and at times competing realities. I present the risk object for popping below, based on real personal data.

I pop to industrial music and electronica, a style of music that arguably induces popping. To dance like a robot, it is necessary to play the preferred music for robots. Perhaps the human body can do robot-like moves that no robot would be able to imitate; this is especially true once the beat starts to fall apart. I am left wondering why we bother mentioning the "robot" at all except perhaps to simplify the explanation. Popping requires a fair amount of endurance and a willingness to move not-like-a-human - at least not all of the time. For me, popping represents a kind of Post-Romantic alienated expressionism performed by the human body. A person could imitate a car crash (a collision). This is a little bit beyond ordinary popping, of course. I imagine that a forklift is easier to imitate than a car crash. Due to my age, I'm unsure how long I can keep up my popping. I might die popping. I consider popping a rather enjoyable activity. However, I chose to focus on it specifically because it can be reasonably expected to have both positive and negative physical impacts; and this is precisely what the contextual results seem to show in the following illustration. Just as a brief explanation for those unacquainted with my sliding terminology: when popping appears under "under" and "less," this means that it is adverse in relation to the context; under "more" and "above," this means that popping is positive.

Popping for me registers as "under" under the contexts "pain" and "joints." Joints involve things such as fingers, knees, ankles, and wrists. I usually place an "x" at the front of a context to show that it is a lagging context: e.g. xjoints. A lag means there is some sort of time delay between the event and the contextual impact. Popping seems to be associated with adverse perceptions in relation to the stomach, feet, and frame (bone context). I actually don't feel much discomfort when I pop (at the actual time of popping) except periodically at the knees and feet. If I do quite a lot of popping, I sometimes get cramps around the kidney area (pain context); again, the body is forced to move in unnatural ways. I normally update the database later in the evening, often many hours after events take place. Perhaps most of the other contexts are fairly self-explanatory. My point here, as the illustration above should show, is that popping or any other type of event can have both positive and negative consequences. It is therefore illogical to deal with events that occur within a system in isolation, as if the other parts of the system don't exist or matter. It is particularly hazardous to attempt to gain benefits from certain events while failing to take into account the adverse impacts apparently associated with those benefits. Conversely, in order to achieve certain outcomes, a person might attempt to exploit events that have multiple impacts. In relation to me, since I only have data on myself, taking a balanced omega formulation seems to have a number of beneficial impacts and not many negative consequences.
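To make the risk object a little more concrete, below is a minimal sketch of how it might be represented for a single event, assuming a simple mapping from context to ULMA placement. The contexts and placements shown are partly placeholders rather than a dump of my actual data.

```python
# Illustrative risk object for one event ("popping"): each context maps to
# a ULMA placement ("under", "less", "more", "above"). The "x" prefix marks
# a lagging context. The positive entry below is hypothetical.
popping_risk = {
    "pain":      "under",
    "xjoints":   "under",
    "stomach":   "less",
    "endurance": "more",   # placeholder positive context for illustration
}

ADVERSE = {"under", "less"}
POSITIVE = {"more", "above"}

adverse  = [c for c, p in popping_risk.items() if p in ADVERSE]
positive = [c for c, p in popping_risk.items() if p in POSITIVE]
print("adverse contexts: ", adverse)
print("positive contexts:", positive)
```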

Protosystem Intermediary

The placement or relevance of an event in relation to a particular context is determined by an algorithm. I describe this algorithm as the "relevancy." There might be different types, but the one that I use throughout this blog is called the "crosswave differential." Recall that a "system" is usually made up of different contexts. My research prototype goes through all of the contexts assigned to a particular system to calculate the relevance of individual events. Similarly, a relevancy algorithm can be used to determine the relevance of a context to a system; of a system to a supersystem and so forth. However, in terms of the immediate matter at hand, there are often many different systems full of contexts containing a myriad of multifarious events. I usually generate an intermediate object or protoform called a "protosystem" that contains all of the event results for all of the contexts (resulting in quite a large data object). Different systems would then extract from the protosystem. Forming an intermediate object is slower than simply generating the results for a single system; but in the end it prevents repetitive processing. For example, if one intends to make systemic comparisons, invoking many different systems perhaps constructed fractally or thematically, it would be time consuming to recalculate relevancy values for the same event-to-context combinations. Making use of an intermediary also allows me to let my computer run for long periods of time without supervision. I usually have dinner, watch television, and get some exercise during the lengthy process. I wasn't able to conceptualize an image for the protosystem object to share; so I simply pasted a portion of the object file below.
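Separately from that excerpt, here is a rough sketch in code of what the protosystem amounts to, assuming a simple dictionary keyed by event-context pairs and a stand-in relevancy function; the crosswave differential itself is not reproduced here.

```python
# Sketch of the protosystem as an intermediate cache: compute relevancy
# once for every event-context pair, then let individual systems extract
# only the pairs they need. The relevancy function is a toy stand-in.

def build_protosystem(events, contexts, relevancy):
    """Large intermediate object holding relevancy for every pair."""
    return {(e, c): relevancy(e, c) for e in events for c in contexts}

def extract_system(protosystem, system_contexts):
    """A system pulls out only the contexts assigned to it."""
    return {(e, c): v for (e, c), v in protosystem.items()
            if c in system_contexts}

def toy_relevancy(event, context):
    # Deterministic placeholder so the example runs end to end.
    return (len(event) + len(context)) % 7 - 3

proto = build_protosystem(["popping", "mustard", "omega"],
                          ["pain", "xjoints", "sleep"],
                          toy_relevancy)
sleep_system = extract_system(proto, {"sleep"})
print(sleep_system)
```

The point of the design is the one made above: pay the computational cost once, then let any number of systems extract from the intermediate object without recalculating relevancy values for the same event-to-context combinations.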

Systems and the Grail Object

The risk object is fairly complex from an algorithmic standpoint. It is also a bit bulky: it is necessary to skim through the different contexts and make assessments on this basis. A person might ask, would it be possible - taking into account the risk object for each event - to create a type of high-level "superobject" showing the apparent systemic value of everything? Inherent in this question is the idea that risk management can be automated. The processing environment is given the duty of attaching value to consequences - a job that might normally go to risk managers. I'm uncertain whether this question can be easily addressed, because the mathematics of confirming systemic effectiveness is perhaps rather subjective. We all have a different sense of how contexts should be prioritized and weighted. Each perspective on the future affects prioritization of the present. For example, I routinely take my blood pressure and check my eyesight; conceivably, an event might help one but harm the other, thereby forcing important choices to be made. (I am using an extreme example. The whole idea is really to select events that help both. But sometimes, due to lack of resources and harsh environmental conditions, it might be necessary to consider difficult tradeoffs. So I'm saying that prioritization has been in the hands of people making choices rather than algorithms.) Nonetheless, I believe that a superobject can serve as a useful reference point to assist in decision-making, keeping competing transpositionalism in mind. I call the superobject a "grail." This object is extracted from a protosystem. A grail object is not much more difficult to form than a risk object.

In the illustration below, I used a fairly arbitrary schema to determine the distribution of the grail over the y-axis, which represents the algorithmic gradient. The relevancy determined the event placement on the y-axis simply by deducting 1 for each incident of "under" (U) and "less" (L), and adding 1 for each "more" (M) and "above" (A). The event fields were then set side-by-side from lowest to highest along the x-axis. (I would expect more realistic applications to make use of weighted ULMA values.) Although the data used in the illustration is real, I chose to highlight some interesting events that probably make the data seem a bit fabricated. It would be easier to explain an event's position if similar contexts were included in the grail. At the moment, I have no idea why Copper Wristband appears near the middle along with Full Moon; however, there are so few incidents of either that I wouldn't say the data necessarily means much. The juxtaposition is due to the contextually diverse nature of this particular grail. Perhaps the placement is purely incidental. Yet even incidental placement can be relevant. I would expect "surprises" by nature to emerge where there is the least amount of attention. Perhaps the middle of the grail forms a conceptual edge sometimes occupied by peripheral concerns: it is the most likely place to find neglected, ignored, and overlooked events. So the placement of an event should not be regarded in purely linear terms but rather as transpositional.
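For those who prefer to see the scoring spelled out, here is a minimal sketch of the unweighted scheme just described; the risk objects below are invented and stand in for my real data.

```python
# Sketch of the unweighted grail scoring: subtract 1 per "under"/"less"
# placement, add 1 per "more"/"above", then arrange events from lowest to
# highest score. A weighted variant would replace the +/-1 increments with
# context-specific weights.

def grail_score(placements):
    score = 0
    for p in placements:
        if p in ("under", "less"):
            score -= 1
        elif p in ("more", "above"):
            score += 1
    return score

def build_grail(risk_objects):
    """risk_objects: {event: {context: placement}} -> [(event, score)]."""
    scored = [(event, grail_score(contexts.values()))
              for event, contexts in risk_objects.items()]
    return sorted(scored, key=lambda pair: pair[1])

risk_objects = {
    "popping": {"pain": "under", "xjoints": "under", "endurance": "above"},
    "omega":   {"pain": "more", "sleep": "above"},
    "hotdogs": {"stomach": "under", "pain": "less"},
}
print(build_grail(risk_objects))   # lowest (left) to highest (right)
```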

Probably not obvious in the illustration above is how I deliberately "manipulate" the distribution through intervention. I usually go through many non-successful data events in order to find those that seem reasonably beneficial. I don't necessarily avoid events that appear on the left; but I definitely try to repeat events occupying the right. So we find above a sea of relatively fruitless data events followed by something resembling a pointy mountain. Among events to the left not labelled on the illustration are the following: nightmares, a particular brand of mouthwash, pickles, and hotdogs. (For me, fast food in general appears on the left.) This is not to say that any of these things are "bad" for people since I have no data on other people. I suppose a reasonable question to ask is why I bother keeping such detailed data about myself. Well, the research prototype is not designed to hold data specifically about me; and when I started collecting information about a year ago, it was to determine if the system could handle diverse personal data. In other words, it was an experiment. I never actually meant to continue personal data collection for much longer than a few months. I have since embraced the idea of being a pioneer: so apart from any other future applications, I intend to keep collecting personal data.

Push and the Displacement Object

Since certain events can have multiple negative and positive consequences for an organization, it stands to reason that intervention should, to the greatest extent possible, attempt to reduce the number of multiple-negative events and increase the number of multiple-positive events. I describe this as the "Push Principle." This strategy is quite different from maximizing or minimizing particular outcomes in isolation. Push occurs through the strategic use of grail objects to achieve multifaceted outcomes. Change is not induced or prevented through the use of a single force focused on a particular type of event - a "poke." Rather, less force is applied over an array of events - a "push." I am actually describing a management philosophy or perspective on the allocation or distribution of resources. However, I also have a data-science-oriented application. Premised on the idea that a certain number of key events may have contributed to systemic decline, I recently tried to determine the likeliest cause of a condition resembling food poisoning - meaning that it might not be food poisoning - which I personally had to face. Am I making this stuff up just so I can blog about it? Not at all; this is merely curious happenstance in the life of a person with liberal eating habits. Although it was a terrible experience, I had enormous interest in the resulting data.

So how does one go about pinpointing the culprits behind something like food poisoning? I have a methodology to share. We can thank my food poisoning for the development of this methodology, since it did not exist before. I call it "differential shift analysis" (DSA) - a rather literal term. It represents a way of identifying recent events that may have contributed to systemic impacts. It might be described as a type of shock analysis. (By the way, I'm still working on a relevancy algorithm to characterize shock in a non-systemic context.) The methodology for DSA isn't complicated: it is only necessary to "subtract" an older grail object (scenario normal) from its more recent grail counterpart (scenario abnormal); this results in a differential shift profile that I call the "displacement pattern." I set the events side-by-side from lowest to highest in order to arrive at the pattern. Although I am describing a case of apparent food poisoning, I feel that this same methodology can be applied to other types of events and phenomena. The illustration below shows what I consider to be a "standard" shift profile (the "dispatch sequence"): if the interval between objects is quite short, there should hardly be any change, resulting in a dispatch where most of the events are near zero. We are then left with the dip and spike associated with the particular shock incident at the outskirts of the sequence.
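A minimal sketch of DSA under my own simplifying assumptions appears below: the grail scores are invented for illustration, and events missing from one grail are treated as scoring zero there, which is an assumption of mine rather than a documented rule.

```python
# Sketch of differential shift analysis: subtract the older ("normal")
# grail scores from the recent ("abnormal") ones and order the differences
# to form the displacement pattern. Scores are invented for illustration.

def displacement_pattern(grail_before, grail_after):
    """Both arguments: {event: score}. Returns (event, shift), low to high."""
    events = set(grail_before) | set(grail_after)
    shifts = [(e, grail_after.get(e, 0) - grail_before.get(e, 0))
              for e in events]
    return sorted(shifts, key=lambda pair: pair[1])

before = {"battered shrimp": 2, "processed meat": 1, "watermelon": 1}
after  = {"battered shrimp": 2, "processed meat": -3, "watermelon": -1}
for event, shift in displacement_pattern(before, after):
    print(f"{event:15s} {shift:+d}")   # large negative shifts mark suspects
```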

The illustration above shows that, at the time of posting, I maintained about 800 events (across the x-axis). A relatively small number of these events (dipping below 0 on the y-axis) seem related to the systemic decline that I experienced. Moreover, some events appear particularly unrelated to the decline; at least, this is one way to interpret the distribution to the right. My memory of the things that I eat and take on a daily basis is rather unreliable. So I was surprised to find that "battered shrimp" seems particularly unrelated to my food poisoning. The same can be said of cheese doodles, milkfish, and oil supplements. The placement of some items is the result of "algorithmic drift": these are events that satisfy the algorithm in a purely mathematical sense but which don't really fit my actual requirements. Among the items that seem most associated with the decline are the following: a type of frozen dessert; a particular processed meat (indicated by the algorithm to be exceptionally likely); some watermelon; cheddar cheese; and a particular brand-name burger. Among the things added to the database recently are mustard and a type of rhythmic exercise meant to stretch the abdominal area. So there are a number of suspicious characters in this line-up.

I was inclined to rule out the burger since, given the incubation period for pathogenic bacteria, it would likely have affected me sooner. Also, the production of a "brand-name" burger involves many levels of quality control. The frozen dessert is something I have had a number of times in the past, and the package is nowhere near its expiration date. If I had to guess, although I can't be certain of course, I would say that the symptoms resembling food poisoning were related to the mustard, processed meat, watermelon, or the abdominal exercise; this is complicated by triple the usual amount of popping on the evening of the illness. Using an approach like this, it almost seems as if I have abandoned all reasoning. Actually, reasoning is inescapable. I merely search for signals in the data. The next image involves what I would describe as the "recovery period" after the illness. As a preventative measure, I was very interested in distributing events to help explain my situation. During this time, as indicated on the x-axis, the number of events increased significantly. I placed a separate data series on the chart to show the events that contributed positively to systems both before and after. (The location on the gradient for this second series is based on the "after" figures.) Some events contributed positively before but negatively after the illness. There are other types of patterns, such as events that remained positive after the illness but nonetheless declined. I leave out these variations for the sake of brevity.

Conceptually speaking, only the contributors around the middle of the displacement pattern are consistently worthwhile. That is to say, their placement is near zero because the contribution to the system was comparable both before and after the illness. However, the contributors to the left and right perhaps owe their placement, to some extent, to the illness itself. For instance, the displacement would not now be higher on the right had the contribution not been lower prior to the illness. Similarly, the displacement would not now be lower on the left had the contribution not been higher prior to the illness. So the different zones of the displacement pattern offer different insights. Although theory might not exist to explain a phenomenon prior to its detection, one should be inspired to develop theory at some point thereafter: e.g. perhaps certain events - such as those related to the application of particular therapies - are best associated with recovery rather than maintenance or the treatment of something chronic.

I hope the usefulness of the Push Principle is apparent in the displacement pattern. It is possible to identify systemic shifts premised on the disproportionately significant impacts of relatively few events. Further, if one attempts to analyze the descriptors of the events, it should theoretically be possible to determine the nature of events that seem particularly related to systemic shifts. This methodology creates investigative openings. If something being investigated involves a coherent system - e.g. forensics in relation to a transit system - differential displacement might provide objective guidance supported by the available data. To the extent that particular activities leverage a system, it might be possible to detect those activities using that same system; this is my general idea on the matter, anyway. For example, if a system is designed to behave in particular ways in relation to specific events, then the relevance of these same events to those specific behaviours should be evident using differential displacement. However, in a large organization, there are so many different types of events occurring routinely that detection can be quite challenging using a clean statistical approach. Reality exists in the ghost patterns. I believe that tracking requires an approach attuned to massive amounts of data that possibly contain only loose or garbled systemic connections.

Symbiotic Algorithms in Human Development

I hope that smart devices containing adaptive algorithms will one day accompany people throughout their lives, helping them to cope with hostile environments and circumstances. I periodically come across articles and blogs that almost seem to suggest that collecting a lot of data is a fruitless endeavour. I actually think that our survival as a species will require us to overcome major hurdles in the future; and our ability to rapidly make effective use of data will emerge as an increasingly central theme. I'm uncertain if my food poisoning example pointed out the following feature of the research prototype: in order to arrive at a grail object, it is necessary for the processing environment to "automatically" sift through systemic constructs, contexts, and data events. (There are just too many different things to handle manually.) I feel that it should be possible to create pre-packaged processing environments for users. I would like these intelligent systems to eventually change along with the bodies of users, their financial resources, lifestyles, relationships, workplace demands, and surrounding risks. I envision "partners for life" perhaps only periodically augmented and refined by diagnostic supercomputers.

In this blog, I pointed out how a system can be used to examine the impact of events and, conversely, how key events can be identified to achieve systemic outcomes. In the example, I make use of differential displacement to support the detection of short-term and infrequent phenomena. While I find "systems" mentioned in the literature in a purely conceptual sense, I believe that the potential of systems remains to be fully realized through the use of algorithmic approaches, particularly in relation to large amounts of loosely connected data. Algorithmic methodologies involving systems raise the possibility of intelligence, automation, and adaptation over the course of routine data collection. I discussed Push as a means of leveraging the system to achieve desirable outcomes; this contrasts with the idea of localized intervention oblivious to systemic consequences. I never really explained the title in the main body of the blog, so it is probably ironic that I should do so now near the end. All of this is merely preamble or transition. In about a month, I will be building on some of the ideas introduced here. In particular, I will be discussing the process of contextual selection in the development of systems. I will also be offering what I consider to be a unique data model premised on the persistent interrelationship between participation and disablement, which I believe to be inherent in many data structures. However, in my post in a couple of weeks, I will be writing about software development focused on data science from an entrepreneurial perspective.
