Building interpretable forecasting and nowcasting models: An overview of DeepXF

Hello, friends. In this blog post, we take a quick look at the "Deep-XF" package, which is useful for forecasting, nowcasting, univariate/multivariate time-series analysis, filtering noise from time-series signals, comparing two input time-series signals, and more. The USP of this package is its collection of add-on utility helper functions, along with its model-explainability module, which can be used to explain model results for both forecasting and nowcasting problems.

Overview:-
 
DeepXF is an open-source, low-code Python library for forecasting and nowcasting problems. It helps in designing complex forecasting and nowcasting models, with built-in utilities for time-series data. It also provides a facility for comparing two time-series input signals for similarity, based on Siamese Neural Networks, and lets one filter noise from time-series signals with just a few lines of code.
 
With this simple, easy-to-use, low-code solution, one can automatically build interpretable deep forecasting and nowcasting models. It enables users to perform an end-to-end Proof of Concept (POC) quickly and efficiently. One can build forecasting models based on deep neural networks such as the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional RNN/LSTM/GRU (BiRNN/BiLSTM/BiGRU), Spiking Neural Network (SNN), Graph Neural Network (GNN), Transformers, Generative Adversarial Network (GAN), Convolutional Neural Network (CNN), and others. It also provides facilities to build a nowcasting model using a Dynamic Factor Model based on the Expectation-Maximization algorithm.
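To give a flavour of what such a forecasting model looks like under the hood, here is a minimal sketch of a GRU-based one-step forecaster in plain PyTorch. This is illustrative only, not DeepXF's actual API; the window size, hidden size, and the random toy data are all assumptions:

import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Minimal GRU forecaster: maps a window of past values to the next step."""
    def __init__(self, n_features=1, hidden_size=64, horizon=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.gru(x)          # out: (batch, window, hidden_size)
        return self.head(out[:, -1])  # predict from the last hidden state

# Toy training loop on random data as a stand-in for a real series
model = GRUForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(32, 24, 1)  # 32 windows of 24 past steps each
y = torch.randn(32, 1)      # next-step targets
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

DeepXF's value lies in hiding this kind of boilerplate (plus scaling, tuning, and CPU/GPU handling) behind its low-code interface.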

The Deep-XF package includes:-

  • Exploratory Data Analysis with facilities like profiling, filtering outliers, univariate/multivariate plots, interactive Plotly plots, rolling-window plots, detecting peaks, etc.
  • Data preprocessing for time-series data with services like finding and imputing missing values, date-time extraction, single-timestamp generation, removing unwanted features, etc.
  • Descriptive statistics for the provided time-series data, normality evaluation, etc.
  • Feature engineering with services like generating time lags, date-time features, one-hot encoding, date-time cyclic features, etc. (see the lag/cyclic-feature sketch after this list).
  • Finding similarity between homogeneous time-series signal inputs with Siamese Neural Networks (a minimal sketch follows the list).
  • Denoising (filtering noise) from time-series input signals (see the SciPy filtering example below).
  • Building deep forecasting models with hyperparameter tuning, leveraging the available computational resources (CPU/GPU).
  • Forecasting model performance evaluation with several key metrics.
  • Game-theory-based methods to interpret forecasting model results.
  • Building a nowcasting model with the Expectation-Maximization algorithm (see the dynamic factor sketch after this list).
  • Explainable nowcasting.
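To make the feature-engineering bullet concrete, here is a small pandas/NumPy sketch that generates lag features and cyclic date-time features. The column names (ds, y) and the chosen lags are purely illustrative:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ds": pd.date_range("2021-01-01", periods=100, freq="H"),
    "y": np.random.randn(100),
})

# Time lags: expose past values of the target as features
for lag in (1, 2, 24):
    df[f"y_lag_{lag}"] = df["y"].shift(lag)

# Cyclic encoding of the hour, so 23:00 and 00:00 end up close together
hour = df["ds"].dt.hour
df["hour_sin"] = np.sin(2 * np.pi * hour / 24)
df["hour_cos"] = np.cos(2 * np.pi * hour / 24)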
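The Siamese-network similarity check boils down to one shared encoder applied to both signals, plus a distance on the resulting embeddings. Here is a minimal PyTorch sketch, where the encoder architecture, window length, and embedding size are assumptions rather than DeepXF's internals:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared encoder; similarity = distance between the two embeddings."""
    def __init__(self, window=128, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, embed_dim)
        )

    def forward(self, a, b):
        za, zb = self.net(a), self.net(b)           # same weights for both inputs
        return F.cosine_similarity(za, zb, dim=-1)  # close to 1 = very similar

enc = SiameseEncoder()
sig_a, sig_b = torch.randn(1, 128), torch.randn(1, 128)
print(enc(sig_a, sig_b).item())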
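Noise filtering can be done in many ways; a common baseline is a Savitzky-Golay smoothing filter, sketched below with SciPy. The window length and polynomial order are arbitrary choices here, not necessarily what DeepXF applies:

import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)

# Fit a local cubic polynomial in a sliding 31-sample window
denoised = savgol_filter(noisy, window_length=31, polyorder=3)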
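Finally, for the nowcasting bullets: statsmodels ships a dynamic factor model estimated with the EM algorithm (DynamicFactorMQ), which gives a rough idea of the underlying technique. The data below is random and purely illustrative:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Three correlated monthly indicators driven by one latent factor
idx = pd.period_range("2015-01", periods=120, freq="M")
factor = np.cumsum(np.random.randn(120))
data = pd.DataFrame(
    {f"x{i}": factor + np.random.randn(120) for i in range(3)}, index=idx
)

# DynamicFactorMQ is fitted with the Expectation-Maximization algorithm
model = sm.tsa.DynamicFactorMQ(data, factors=1, factor_orders=1)
res = model.fit(disp=False)
print(res.forecast(steps=3))  # short-horizon forecast / nowcast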
Author Info:
Ajay Arunachalam is an AWS Certified Machine Learning & Cloud Architect Specialist. Presently, he is working as a Senior Data Scientist & Researcher (Artificial Intelligence) at the Centre for Applied Autonomous Sensor Systems (AASS), Orebro University, Sweden. Prior to that, he worked as a Data Scientist at True Corporation, a communications conglomerate, working with petabytes of data and building and deploying deep models in production. He truly believes that transparency in AI systems is the need of the hour before we fully accept the power of AI. With this in mind, he has always strived to democratize AI and has been inclined towards building interpretable models. His interests lie in Applied Artificial Intelligence, Machine Learning, Deep Learning, Deep Reinforcement Learning, and Natural Language Processing, specifically learning good representations. From his experience working on real-world problems, he fully acknowledges that finding good representations is the key to designing systems that can solve interesting, challenging real-world problems, go beyond human-level intelligence, and ultimately explain complicated data that we don't understand. To achieve this, he envisions learning algorithms that can learn feature representations from both unlabelled and labelled data, be guided with and/or without human interaction, and operate at different levels of abstraction in order to bridge the gap between low-level data and high-level abstract concepts.

https://sites.google.com/site/ajayarunachalamprofile/
