Data Animation: Much Easier than you Think!

In this article, I explain how to easily turn data into videos. Data animations add value to your presentations, and they also constitute a useful tool in exploratory data analysis. Of course, as in any data visualization, careful choice of what to put in your video, and how to format it, is critical. The technique described here barely requires any programming skills: you may be able to produce your first video in less than one hour of work, even with no coding at all. I also believe that data science bootcamps and machine learning courses should include this topic in their curriculum.

Examples include convergence of a 2D series related to the Riemann Hypothesis, and fractal supervised classification.

Preparing a video

Producing a data video involves three steps:

  • Step 1: Prepare a standard data set (for instance, summary data in Excel, or raw data in text format) with one extra column indicating the frame number. If your video has 20 frames, that column contains an integer between 1 and 20.
  • Step 2: Create the frames. In our example, they consist of 20 images, typically in PNG format, named (say) input001.png, input002.png, and so on. The number of frames can be as large as a few thousand or as small as 10. The production of the PNG images is typically automated; a minimal sketch of these first two steps follows this list.
  • Step 3: Turn your images into a video, say an mp4 file. Some people think that this is the most difficult part, but actually it is the easiest one. It can be done with a few clicks, even without writing a single line of code, using free online platforms designed for that purpose. Here I illustrate how to do it in R with just two lines of code.
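
As a minimal sketch of Steps 1 and 2, here is how the frames could be produced in R. The file frames.txt and its columns x, y and frame are hypothetical placeholders; only the naming convention (input001.png, input002.png, and so on) comes from the steps above.

data <- read.table("frames.txt", header = TRUE)   # hypothetical data set with a frame column
for (f in 1:20) {
  png(sprintf("input%03d.png", f), width = 1080, height = 720)
  sub <- data[data$frame <= f, ]   # cumulative display: all points up to frame f
  plot(sub$x, sub$y, pch = 20, xlab = "", ylab = "")
  dev.off()
}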

If your plan is to create a small presentation with 10 frames, a video may not be the best medium. You can still do it with a video by choosing a long duration (say 5 seconds) for each frame. However, a slide presentation may be a better alternative. My videos typically contain between 100 and 1,000 frames, with a frequency anywhere from 4 to 12 frames per second.

Rules of Thumb

Your images should be large enough, say 1080 x 720 pixels; a rectangular format is better than a square one. Aim for high-resolution images in a format with lossless compression, such as PNG (the PNG format performs lossless compression automatically). This matters because of the way standard tools encode and display videos. If lossless compression is not an option, a video may not be your best medium. That said, one of the examples provided here consists of 600 x 600 images, and the rendering is still pretty good.

To decide on the number of images per second (the frame rate), try different values and pick the one that best fits your needs. A typical scenario is this: successive images exhibit very little change, even though the change between the first and last frame is huge. In that case, use many images (say 500), at 10 images per second. Your video will last 50 seconds, and the size of the mp4 file may be less than 1 MB. At 20 images per second, it will look like a movie; Hollywood uses 24 frames per second. Dogs need a higher frequency due to peculiarities of their vision, but I assume your video is for human beings.
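
As a hypothetical illustration (the file names below are placeholders), encoding the same frames at two different frame rates is a cheap way to compare them side by side:

png_files <- sprintf("frames/img_%03d.png", 1:500)            # hypothetical paths
av::av_encode_video(png_files, "slow.mp4", framerate = 10)    # 500 / 10 = 50 seconds
av::av_encode_video(png_files, "movie.mp4", framerate = 20)   # movie-like, 25 seconds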

Once your video is produced, upload it to YouTube. It can also be embedded in articles and web pages, such as here on Data Science Central.

Below, I provide details and illustrations about how to produce a data video. If you want to avoid any kind of coding, however easy and limited, explore online platforms such as Clideo. These platforms turn your images into a video with just a few clicks. Google “PNG to MP4 converter” to find more of these platforms, and the reviews they get.

Technical Implementation

Whatever programming language you use, a Google search (say, “Python video library”) will turn up the available video libraries. First, I describe how two lines of R code can produce a video:

png_files <- sprintf("c:/Users/vince/tex/img_%03d.png", 0:124)
av::av_encode_video(png_files, 'c:/Users/vince/tex/imgPB.mp4', framerate = 6)

This piece of code shows elements that are common to video creation, regardless of the programming language:

  • The PNG images (created in a separate step, possibly using a different tool or language) are stored on my laptop, in the directory c:/Users/vince/tex/.
  • The names of the PNG files are img_000.png, img_001.png, all the way to img_124.png. Each of these files becomes a frame in the video, the first one being img_000.png and the last one img_124.png. The %03d format string indicates that the images are numbered sequentially with 3 digits; the sprintf command stores the full paths of these 125 images in the variable png_files.
  • You need a video library to create the video. In R, the preferred choice is the av library. To install it, use the command install.packages('av'). Once installed, you may have to include library('av') at the beginning of the source code; on my system, I didn’t have to. The av package is based on the FFmpeg libraries, written in C. In Python, you can use the PyAV library, which is built on the same FFmpeg back end.
  • The call to av::av_encode_video (one of the functions in the av library) produces the video by (1) fetching the PNG files whose names are stored in the variable png_files and (2) encoding them. Here the output is the file imgPB.mp4, created in the directory c:/Users/vince/tex/. Note the argument framerate = 6, specifying the number of frames per second. My video lasts 125 / 6 ≈ 20.8 seconds.

Now, assuming you have a list of PNG files, you can create your video!
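
If you prefer not to hard-code the file names, here is a variant sketch, assuming the same directory layout as above; sorting guarantees the frames are encoded in the right order:

png_files <- sort(list.files("c:/Users/vince/tex",
  pattern = "^img_\\d{3}\\.png$", full.names = TRUE))
av::av_encode_video(png_files, "c:/Users/vince/tex/imgPB.mp4", framerate = 6)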

Creation of the PNG Files

I use various mechanisms to produce my images; I describe one of them here, in R. The full source code is also in my GitHub repository, here. This repository provides links to my YouTube videos, and the auxiliary data sets needed to produce the images. When coding in R, I use the Cairo library to produce better images by avoiding aliasing; I describe it in my previous article, here. If you use ggplot2 to produce graphics, you don’t need this trick. If you use the old plot command in R, the Cairo library (also accessible from Python) does a nice job improving the rendering, with just two lines of code.
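
As a minimal sketch (assuming the Cairo package is installed), CairoPNG is a drop-in replacement for the base png device; the file name and plot below are placeholders:

library('Cairo')
CairoPNG("frame%03d.png", width = 600, height = 600)   # anti-aliased PNG device
plot(sin(seq(0, 10, by = 0.01)), type = "l")           # smoother curves than base png
dev.off()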

Example 1

The video below represents the successive approximations of the Dirichlet eta function, using partial sums of its standard series, for a specific value of the complex (two-dimensional) argument. The details are unimportant unless you are interested in number theory; they help explain why the famous Riemann Hypothesis has so far resisted all proofs, despite the one-million-dollar prize attached to it. What matters, though, is the message conveyed by the video, especially for machine learning professionals familiar with gradient descent and similar algorithms. Here the iterations start chaotically, then become smooth in a chaotic way. The orbit circles around “sinks” much of the time, jumping from sink to sink. Towards the end, it spends more and more time around sinks, until a final sink, called the “black hole”, absorbs it. It has finally converged!
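
If you want to reproduce the flavor of this orbit, here is a minimal sketch of the partial sums in question. It is my own illustration with a hypothetical argument s, not the code behind the video:

# Partial sums of the Dirichlet eta function eta(s) = sum of (-1)^(n+1) / n^s
s <- complex(real = 0.5, imaginary = 14)   # hypothetical value on the critical line
n <- 1:5000
orbit <- cumsum((-1)^(n + 1) * n^(-s))     # successive partial sums (the 2D orbit)
plot(Re(orbit), Im(orbit), type = "l", xlab = "Real part", ylab = "Imaginary part")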

Initially, my interest was in detecting the number of clusters generated by 2D stochastic processes similar to Brownian motion paths, but with infinite variance. Unlike Brownian motions or random walks, these processes exhibit strong cluster structures, with well-separated clusters. The related video below shows a deterministic, semi-chaotic (anything but random) sequence, yet it displays a nice cluster distribution; I call the clusters “sinks”. My goal was to use machine learning techniques to identify and count these sinks, for various similar orbits. This is beyond the scope of this article; I discuss it in detail (including the cause of the sinks) in my upcoming book, here.

Riemann zeta function
Riemann zeta function, randomized

R Code (Full Version)

The input file av_demo_vg2b.txt is a comma-separated text file that I produced with Excel (yep). It is available in my GitHub repository, here. The call to the CairoPNG function (which requires the Cairo library) produces the 500 PNG files (the frames), each 600 x 600 pixels. Each row in the input data set consists of

  • the index k of a vector,
  • the coordinates x, y of the vector in question,
  • the coordinates x2, y2 of the next vector to be displayed,
  • the index col of that vector (used in the randomized version).

The input file has 20 x 500 = 10,000 rows. The R program joins (x, y) to (x2, y2) via the arrows function; each frame adds 20 consecutive undirected arrows to the previous frame. I chose the colors with the rgb function, passed to the col argument of arrows.

library('Cairo')

# One anti-aliased PNG device per frame; %03d numbers the files av_demo001.png, ...
CairoPNG(filename = "c:/Users/vince/tex/av_demo%03d.png", width = 600, height = 600)
data <- read.table("c:/Users/vince/tex/av_demo_vg2b.txt", header = TRUE)

k   <- data$k     # index of the vector
x   <- data$x     # coordinates of the current vector
y   <- data$y
x2  <- data$x2    # coordinates of the next vector
y2  <- data$y2
col <- data$col   # color index (used in the randomized version)

for (n in 1:500) {
  # Empty plot (cex = 0) to set up the coordinate system, then a black background
  plot(x, y, pch = 20, cex = 0, col = rgb(0, 0, 0), xlab = "", ylab = "", axes = FALSE)
  rect(-10, -20, 50, 50, density = NULL, angle = 45, col = rgb(0, 0, 0), border = NULL)
  # Keep only the vectors belonging to frames 1 to n (20 arrows per frame)
  a  <- x[k <= n * 20]
  b  <- y[k <= n * 20]
  a2 <- x2[k <= n * 20]
  b2 <- y2[k <= n * 20]
  cl <- col[k <= n * 20]   # subset of the color indices for the displayed arrows
  # Join (x, y) to (x2, y2); sine harmonics of the color index drive the RGB channels
  arrows(a, b, a2, b2, length = 0, angle = 10, code = 2,
    col = rgb(0.9 * abs(sin(0.00200 * cl)), 0.6 * abs(sin(0.00150 * cl)),
              abs(sin(0.00300 * cl))))
}
dev.off()

png_files <- sprintf("c:/Users/vince/tex/av_demo%03d.png", 1:500)
av::av_encode_video(png_files, 'c:/Users/vince/tex/av_demo2b.mp4', framerate = 12)

Remark: I used sine waves for the RGB (red/green/blue) color channels, with frequencies that are small integer multiples of a base frequency. Such waves, called harmonics in signal processing, make the colors harmonious.
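
As a quick, hypothetical way to preview such a palette outside the video code, you can plot one colored square per index:

t <- 1:10000
pal <- rgb(0.9 * abs(sin(0.0020 * t)), 0.6 * abs(sin(0.0015 * t)), abs(sin(0.0030 * t)))
plot(t, rep(1, length(t)), col = pal, pch = 15, cex = 2,
  axes = FALSE, xlab = "", ylab = "")   # displays the harmonic color gradient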

Example 2

This video illustrates an algorithm performing fractal supervised clustering in 2D. The small training set has 4 groups. The system initially grows, then enters and remains in equilibrium. I am working on another video featuring the birth and death of clusters, with dominant clusters successfully eating away the other ones regardless of size, or with points added to or removed from the training set over time.

In this video, cluster mixing takes place near fuzzy but stable boundaries. I use this video to study the convergence of these systems, depending on the initial conditions. The source code is in my GitHub repository, here. I describe the underlying process in my upcoming book (see here); the “PB” abbreviation in the video filename stands for “Poisson-binomial point process”, also called a perturbed lattice process.

Fractal supervised clustering
Fractal supervised clustering, faster

Adding axes, text or sound will be the topic of a future article.

About the Author

Vincent Granville is a machine learning scientist, author, and publisher. He co-founded Data Science Central (acquired by TechTarget) and, most recently, founded Machine Learning Recipes.