AI Robotization with InterSystems IRIS Data Platform

Fixing the terminology

A robot need not be huge, humanoid, or even material (here we disagree with Wikipedia, although a paragraph later it softens its initial definition and admits that a robot can take a virtual form). A robot is an automation: from an algorithmic viewpoint, an automation for the autonomous (algorithmic) execution of concrete tasks. A light detector that switches street lights on at night is a robot. Email software that sorts incoming messages into "external" and "internal" is also a robot.
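
To make the definition concrete, here is a minimal sketch of such an email-sorting robot (the company domain and the single rule are hypothetical placeholders):

    # A minimal "robot" in the above sense: an automation that autonomously
    # executes one concrete task -- sorting e-mails into "internal"/"external".
    # The company domain below is a hypothetical placeholder.
    INTERNAL_DOMAIN = "ourcompany.com"

    def classify_email(sender_address: str) -> str:
        # The whole "algorithm" is one deterministic rule.
        domain = sender_address.rsplit("@", 1)[-1].lower()
        return "internal" if domain == INTERNAL_DOMAIN else "external"

    print(classify_email("alice@ourcompany.com"))  # internal
    print(classify_email("bob@example.org"))       # external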

Artificial intelligence (in the applied, narrow sense; Wikipedia again interprets it differently) is a set of algorithms for extracting dependencies from data. It will not execute any tasks on its own; to make it do so, one has to implement it as concrete analytic processes (input data, plus models, plus output data, plus process control). An analytic process acting as an "artificial intelligence carrier" can be launched by a human or by a robot. It can be stopped, and managed, by either of the two as well.

Interaction with the environment

Artificial intelligence needs data that is suitable for analysis. When an analyst starts developing an analytic process, the data for the model is prepared by the analyst himself. Usually, he builds a dataset with enough volume and features to be used for model training and testing. Once the accuracy (and, less frequently, the "local stability" over time) of the obtained result becomes satisfactory, a typical analyst considers his work done. Is he right? In reality, the work is only half-done. It remains to secure the uninterrupted and efficient running of the analytic process, and that is where our analyst may run into difficulties.
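
The analyst's half of the work can be sketched as follows (a typical train-and-test routine on a synthetic placeholder dataset; any real dataset would do):

    # The analyst's half of the work: build a dataset with enough volume and
    # features, train, test, and stop once the accuracy looks satisfactory.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))
    # ...and here a typical analyst considers the work done.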

The tools used for developing artificial intelligence and machine learning mechanisms are, except in the simplest cases, unsuitable for efficient interaction with the external environment. For example, we can (for a short time) use Python to read and transform sensor data from a production process. But Python will not be the right tool for overall monitoring of the situation and for switching control among several production processes, scaling the corresponding computation resources up and down, and analyzing and handling all kinds of "exceptions" (e.g., the unavailability of a data source, an infrastructure failure, a user interaction issue, etc.). To do that we need a data management and integration platform. And the more loaded and the more variable our analytic process is, the higher the bar of our expectations for the platform's integration and DBMS components. An analyst bred on scripting languages and traditional model-development environments (including utilities like "notebooks") will find it nearly impossible to secure an efficient productive implementation for his analytic process.

Adaptability and adaptiveness

Environment changeability manifests itself in different ways. In some cases, the essence and nature of the things managed by artificial intelligence change (e.g., the enterprise enters new business areas, national and international regulators impose new requirements, customer preferences relevant to the enterprise evolve, etc.). In other cases, the information signature of the data coming from the external environment becomes different (e.g., new equipment with new sensors, more performant data transmission channels, the availability of new data "labeling" technologies, etc.).

Can an analytic process "reinvent itself" as the structure of the external environment changes? Let us simplify the question: how easy is it to adjust the analytic process if the structure of the external environment changes? Based on our experience, the answer is plain and sad: in most implementations known to us (not by us!), it will be necessary to at least rewrite the analytic process, and most probably to rewrite the AI it contains. End-to-end rewriting may not be the final verdict, but programming an addition that reflects the new reality, or changing the "modeling part", may indeed be needed. And that could mean prohibitive overhead, especially if environment changes are frequent.

Agency: the limit of autonomy?

The reader may have noticed that we are moving toward a more and more complex reality proposed to artificial intelligence, while taking note of the possible "instrument-side consequences", in the hope of finally being able to respond to the emerging challenges.

We are now approaching the need to equip an analytic process with a level of autonomy that lets it cope not only with the changeability of the environment, but also with the uncertainty of its state. No reference to the quantum nature of the environment is intended here (we will discuss that in one of our future publications); we simply consider the probability of an analytic process encountering the expected state at the expected moment in the expected "volume". For example: the process "thought" it would manage to complete a model training run before the new data to apply the model to arrived, but "failed" to complete it (e.g., for several objective reasons, the training sample contained more records than usual). Another example: the labeling team has added a batch of new press material to the process, a vectorization model has been trained on that new material, while the neural network is still using the previous vectorization and treats some extremely relevant information as "noise". Our experience shows that overcoming such situations requires splitting what used to be a single analytic process into several autonomous components and creating, for each of the resulting agent processes, its own "buffered projection" of the environment. Let us call this action (goodbye, Wikipedia) the agenting of an analytic process, and let us call agency the quality acquired by an analytic process (or rather by a system of analytic processes) through agenting.
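
As a minimal illustration of agenting (a sketch of the pattern only, not the platform implementation discussed below): each agent runs autonomously against its own buffered projection of the environment, so a slow consumer never blocks a fast producer.

    import queue
    import threading
    import time

    # The buffer is the consumer's "buffered projection" of the environment:
    # it decouples the producer's pace from the consumer's.
    labeled_buffer = queue.Queue()

    def generator():
        # Autonomous agent: keeps feeding records into the buffer.
        for i in range(10):
            labeled_buffer.put(f"record-{i}")
            time.sleep(0.01)
        labeled_buffer.put(None)  # shutdown signal

    def analyzer():
        # Autonomous agent: consumes whatever has accumulated, when ready.
        while True:
            record = labeled_buffer.get()
            if record is None:
                break
            print("analyzing", record)

    threads = [threading.Thread(target=generator), threading.Thread(target=analyzer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()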

A task for the robot

At this point, we will try to come up with a task that calls for a robotized AI with all the qualities mentioned above. It will not take a long journey to get to ideas, especially given the wealth of very interesting cases, and solutions for those cases, published on the Internet; we will simply re-use one such case/solution (to obtain both the task and the solution formulation). The scenario we have chosen is the classification of postings ("tweets") in the Twitter social network based on their sentiment. To train the models, we have rather large samples of "labeled" tweets (i.e., with the sentiment specified), while classification will be performed on "unlabeled" tweets (i.e., without the sentiment specified):

Figure 1 Sentiment-based text classification (sentiment analysis) task formulation

An approach to creating mathematical models that can learn from labeled texts and then classify unlabeled texts of unknown sentiment is presented in a great example published on the Web.
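
For orientation, here is a minimal sketch in the same spirit: learn from labeled tweets, then score unlabeled ones. It is an illustrative stand-in using scikit-learn (bag-of-words plus logistic regression on made-up toy tweets), not the vectorization-plus-RNN pipeline of the referenced example:

    # Learn from labeled tweets, then classify unlabeled ones.
    # Illustrative stand-in: TF-IDF + logistic regression on toy data,
    # not the RNN-based model used by the robot itself.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled_tweets = ["great service, love it", "awful delay, very upset",
                      "best day ever", "worst experience, never again"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(labeled_tweets, labels)

    unlabeled_tweets = ["love the new update", "this delay is awful"]
    # "Probability to be positive" for each unlabeled tweet
    print(model.predict_proba(unlabeled_tweets)[:, 1])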

The data for our scenario has been kindly made available on the Web.

With all the above at hand, we could start "assembling a robot"; however, we prefer to complicate the classical task by adding a condition: both labeled and unlabeled data are fed to the analytic process as standard-size files, with new files arriving as the process "consumes" the already fed ones. Therefore, our robot will need to begin operating on minimal volumes of training data and continually improve classification accuracy by repeating model training on gradually growing data volumes.
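
That condition turns training into a loop over a growing dataset. A sketch of the idea (the file feed is simulated; the batch contents and number of runs are placeholders):

    # Retrain on the gradually growing total as new standard-size "files"
    # of labeled data arrive. The feed is simulated; contents are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    accumulated_texts, accumulated_labels = [], []

    def next_batch(run):
        # Stand-in for "a newly fed file of labeled tweets".
        return [f"positive tweet {run}", f"negative tweet {run}"], [1, 0]

    model = None
    for run in range(7):  # 7 training runs, as in the results section below
        texts, labels = next_batch(run)
        accumulated_texts += texts
        accumulated_labels += labels
        # Repeat training on the full accumulated volume.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(accumulated_texts, accumulated_labels)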

Into the InterSystems workshop

Taking the scenario just formulated as an example, we will demonstrate that InterSystems IRIS, together with the ML Toolkit set of extensions, can robotize artificial intelligence: it achieves efficient interaction with the external environment for the analytic processes we create, while keeping them adaptable, adaptive, and agent (the "three A's").

Let us begin with agency. We deploy four business processes in the platform:

Figure 2 Configuration of an agent-based system of business processes with a component for interaction with Python

  • GENERATOR – as previously generated files are consumed by the other processes, generates new files with input data (labeled tweets – positive and negative – as well as unlabeled tweets)
  • BUFFER – as already buffered records are consumed by the other processes, reads new records from the files created by GENERATOR and deletes those files after having read the records from them
  • ANALYZER – consumes records from the unlabeled buffer and applies the trained RNN (recurrent neural network) to them, transferring the "applied" records, with the respective "probability to be positive" values added, to the monitoring buffer; also consumes records from the labeled (positive and negative) buffers and trains the neural network on them
  • MONITOR – consumes the records processed and transferred to its buffer by ANALYZER, evaluates the classification error metrics demonstrated by the neural network after its last training, and triggers new training by ANALYZER

Our agent-based system of processes can be illustrated as follows:

Figure 3 Data flows in the agent-based system

All the processes in our system function independently of one another, but listen to each other's signals. For example, the signal for the GENERATOR process to start creating a new file with records is the deletion of the previous file by the BUFFER process.
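
That particular signal convention can be sketched as follows (an illustration of the convention only, not the platform code; the path and polling interval are hypothetical):

    import os
    import time

    FEED_DIR = "feed"  # hypothetical location watched by both processes
    FEED_FILE = os.path.join(FEED_DIR, "tweets_batch.txt")

    def generator_step(make_records):
        os.makedirs(FEED_DIR, exist_ok=True)
        # The existence of the previous file means BUFFER has not yet
        # consumed (and deleted) it, so GENERATOR waits.
        while os.path.exists(FEED_FILE):
            time.sleep(1)
        # Deletion observed -- the signal to generate the next file.
        with open(FEED_FILE, "w") as f:
            f.write("\n".join(make_records()))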

Now let us look at adaptiveness. The adaptiveness of the analytic process in our example is implemented via the "encapsulation" of the AI as a component that is independent of the logic of the carrier process and whose main functions, training and prediction, are isolated from one another:

Figure 4 Isolation of the AI's main functions in an analytic process – training and prediction using mathematical models

Since the above-quoted fragment of the ANALYZER process is part of an "endless loop" (triggered at process startup and running until the whole agent-based system is shut down), and since the AI functions are executed concurrently, the process is capable of adapting its use of AI to the situation: training models if the need arises, and otherwise predicting with the available version of the trained models. The need to train the models is signaled by the adaptive MONITOR process, which functions independently of the ANALYZER process and applies its own criteria to estimate the accuracy of the models trained by ANALYZER:
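
The control flow just described can be sketched like this (a simplification of the pattern with hypothetical names; the real processes run as business processes inside InterSystems IRIS, not as a Python script):

    import threading

    # ANALYZER's "endless loop": keep predicting with the current model
    # version while training runs concurrently whenever MONITOR signals it.
    retrain_needed = threading.Event()  # set by MONITOR
    model_lock = threading.Lock()
    current_model = None

    def swap_model(new_model):
        global current_model
        with model_lock:
            current_model = new_model

    def analyzer_loop(fetch_records, train, predict, shutdown):
        while not shutdown.is_set():
            if retrain_needed.is_set():
                retrain_needed.clear()
                # Train in the background; prediction keeps the old model.
                threading.Thread(target=lambda: swap_model(train())).start()
            records = fetch_records()
            with model_lock:
                model = current_model
            if model is not None and records:
                predict(model, records)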

Figure 5 Recognition of the model type and application of the respective accuracy metrics by MONITOR process

We continue with adaptability. An analytic process in InterSystems IRIS is a business process that has a graphical or XML representation in the form of a sequence of steps. The steps, in their turn, can be sequences of other steps, loops, condition checks, and other process controls. The steps can execute code or transmit information (which can be code as well) for treatment by other processes and external systems.

If an analytic process needs to be changed, we can do that in either the graphical editor or the IDE. Changing the analytic process in the graphical editor allows adapting the process logic without programming:

Figure 6 ANALYZER process in the graphical editor with the menu open for adding process controls

Finally, there is interaction with the environment. In our case, the most important element of the environment is the mathematical toolset Python. For interaction with Python and R, the corresponding functional extensions were developed: Python Gateway and R Gateway. Their key functionality is enabling comfortable interaction with a concrete toolset. We have already seen the component for interaction with Python in the configuration of our agent-based system, and we have demonstrated that business processes containing AI implemented in the Python language can interact with Python.

The ANALYZER process, for instance, carries the model training and prediction functions implemented in InterSystems IRIS using the Python language, as shown below:

Figure 7 Model training function implemented in ANALYZER process in InterSystems IRIS using Python

Each of the steps in this process is responsible for a specific interaction with Python: a transfer of input data from the InterSystems IRIS context to the Python context, a transfer of code for execution in Python, or a return of output data from the Python context to the InterSystems IRIS context.
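
The pattern of these three interaction types can be sketched in pure Python, with the remote Python context emulated by a namespace dictionary (this illustrates the pattern only; it is not the Python Gateway API itself):

    # The "Python context" is emulated by a plain dict. Three interactions:
    # data in, code execution, data out.
    python_context = {}

    # 1. Transfer of input data into the Python context
    python_context["tweets"] = ["love it", "hate it"]

    # 2. Transfer of code for execution in Python
    code = "scores = [1.0 if 'love' in t else 0.0 for t in tweets]"
    exec(code, python_context)

    # 3. Return of output data back to the calling context
    print(python_context["scores"])  # [1.0, 0.0]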

The type of interaction used most in our example is the transfer of code for execution in Python:

Figure 8 Python code deployed in ANALYZER process in InterSystems IRIS is sent for execution to Python

In some interactions, output data is returned from the Python context to the InterSystems IRIS context:

Figure 9 Visual trace of ANALYZER process session with a preview of the output returned by Python in one of the process steps

Launching the robot

Launching the robot right here in this article? Why not: here is the recording of our webinar in which (among other interesting AI stories relevant to robotization!) the example discussed in this article was demoed. Webinar time is always limited, unfortunately, and we still prefer to showcase our work as illustratively, though briefly, as possible; we are therefore sharing below a more complete overview of the outputs produced (7 training runs, including the initial training, instead of just 3 in the webinar):

Figure 10 Robot reaching a steady AUC above 0.8 on prediction

These results match our intuitive expectations: as the training dataset gets filled with "labeled" positive and negative tweets, the accuracy of our classification model improves (as shown by the gradual increase of the AUC values obtained on prediction).
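
For reference, this is how such an AUC-versus-training-volume series can be computed; the sketch below uses synthetic data (the figures above come from the actual robot run):

    # AUC across growing training volumes, on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X_test = rng.normal(size=(500, 20))
    y_test = (X_test[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

    for run, n_train in enumerate([100, 200, 400, 800, 1600, 3200, 6400], 1):
        X = rng.normal(size=(n_train, 20))
        y = (X[:, 0] + 0.5 * rng.normal(size=n_train) > 0).astype(int)
        model = LogisticRegression().fit(X, y)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"training run {run}: AUC = {auc:.3f}")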

What conclusions can we draw at the end of this article?

• InterSystems IRIS is a powerful platform for the robotization of processes involving artificial intelligence

• Artificial intelligence can be implemented both in the external environment (e.g., Python or R with their modules containing ready-to-use algorithms) and within the InterSystems IRIS platform (using native function libraries or by writing algorithms in the Python and R languages). InterSystems IRIS secures interaction with external AI toolsets, allowing their capabilities to be combined with its native functionality

• InterSystems IRIS robotizes AI by applying the "three A's": adaptable, adaptive, and agent business processes (in other words, analytic processes)

• InterSystems IRIS operates external AI (Python, R) via kits of specialized interactions: transfer/return of data, transfer of code for execution, etc. One analytic process can interact with several mathematical toolsets

• InterSystems IRIS consolidates input and output modeling data on a single platform, and maintains historization and versioning of calculations

• Thanks to InterSystems IRIS, artificial intelligence can both be used as a specialized analytic mechanism and be built into OLTP and integration solutions

To those who have read this article and become interested in the capabilities of InterSystems IRIS as a platform for developing and deploying machine learning and artificial intelligence mechanisms, we propose a further discussion of the potential scenarios relevant to your company, and a collaborative definition of the next steps.
