Visualization has become a key application of data science in the telecommunications industry.

Specifically, telecommunication analysis depends heavily on geospatial data. Telecommunication networks are inherently geographically dispersed, and analysing that dispersion can yield valuable insights into network structure, consumer demand, and availability.

To illustrate this point, a k-means clustering algorithm is used…
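The excerpt is truncated at this point. As a hedged sketch of the idea, here is how k-means might be applied to geospatial coordinates with sklearn; the coordinates and cluster count below are illustrative assumptions, not the post's data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative (lat, lon) points for network sites -- made-up data,
# not the post's dataset.
coords = np.array([
    [40.71, -74.00], [40.73, -73.99], [40.70, -74.01],    # group A
    [34.05, -118.24], [34.06, -118.25], [34.04, -118.26], # group B
])

# Partition the sites into two geographic clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coords)
print(km.labels_)           # each site's cluster assignment
print(km.cluster_centers_)  # approximate geographic centres
```

Each cluster centre can then be plotted on a map to visualise where demand or coverage concentrates.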

Added by Michael Grogan on February 19, 2019 at 3:44am — No Comments

Image recognition and classification is a rapidly growing field within machine learning. In particular, object recognition is a key feature of image classification, and its commercial implications are vast.

For instance, image classifiers will increasingly be used to:

- Replace passwords with facial recognition
- Allow autonomous vehicles to detect obstructions
- Identify geographical features from satellite imagery

These…

Added by Michael Grogan on February 17, 2019 at 11:00am — No Comments

A *variance-covariance matrix* shows the variance of each variable along its diagonal and the covariance between every pair of variables in its off-diagonal entries.

A variance-covariance matrix is particularly useful for analysing the volatility of, and the relationships between, the variables in a dataset. For instance, a variance-covariance matrix has particular applications when it comes to…
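As a minimal illustration of the structure described above (assuming numpy; the returns below are made-up, not the stock data from the post):

```python
import numpy as np

# Made-up daily returns for two assets (rows = observations,
# columns = variables).
returns = np.array([
    [0.01,  0.02],
    [-0.02, -0.01],
    [0.03,  0.02],
    [0.00, -0.01],
])

# rowvar=False treats columns as variables. Diagonal entries are
# each asset's variance; off-diagonals are the covariance between
# the two assets.
cov = np.cov(returns, rowvar=False)
print(cov)
```

Note that the matrix is symmetric, since cov(X, Y) = cov(Y, X).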

Added by Michael Grogan on June 30, 2018 at 4:30am — No Comments

The **numpy**, **scipy**, and **statsmodels** libraries are frequently used to generate regression output. In practice, a user might choose a different library depending on the data in question, among other considerations. Here, we will go through how to use each of the above to generate regression output.
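A brief sketch of a simple linear fit with numpy and scipy (statsmodels follows a similar pattern via `sm.OLS`); the data here is synthetic, not from the post:

```python
import numpy as np
from scipy import stats

# Synthetic data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)

# numpy: least-squares fit of a degree-1 polynomial.
slope_np, intercept_np = np.polyfit(x, y, 1)

# scipy: linregress also reports the r-value and p-value.
res = stats.linregress(x, y)
print(slope_np, res.slope, res.rvalue)
```

Both approaches recover essentially the same slope and intercept; scipy's extra diagnostics (r-value, p-value, standard error) are often the deciding factor.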

Added by Michael Grogan on August 26, 2017 at 6:30am — No Comments

Here is how we can use the **maps**, **mapdata**, **ggplot2**, and **ggrepel** libraries to create maps in R.

In this particular example, we are going to create a world map marking the locations of Beijing and Shanghai, two cities in China. The map will display the Northern Hemisphere from Europe to Asia.

```r
library(maps)
library(mapdata)
library(ggplot2)
library(ggrepel)

cities =…
```

Added by Michael Grogan on August 22, 2017 at 4:00am — 1 Comment

Functions are used to simplify a series of calculations.

For instance, suppose we have an array of numbers, to each of which we wish to add another value. Instead of carrying out a separate calculation for each number in the array, it is much easier to create a function that does this for us automatically.

**A function in R generally works by:**

(a) Defining the variables to include in the function and the calculation. e.g. to add two…
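The R excerpt cuts off here. As a language-neutral illustration of the same idea (sketched in Python rather than R), a function that adds a fixed value to every element of an array:

```python
def add_to_each(values, amount):
    """Add `amount` to every element of `values`."""
    return [v + amount for v in values]

# One call replaces a separate calculation per element.
print(add_to_each([1, 2, 3], 10))
```

The function defines its inputs (the array and the amount) and the calculation once, then applies it to every element.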

Added by Michael Grogan on August 12, 2017 at 5:30am — No Comments

PostgreSQL is a widely used relational database system for creating and managing large amounts of data effectively.

Here, you will see how to:

1) create a PostgreSQL database using the Linux terminal

2) connect the PostgreSQL database to R using the **RPostgreSQL** library

In this example, we are going to create a simple database containing a table of dates, cities, and average temperature in degrees (Celsius).

We will name…
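The post's PostgreSQL commands are truncated here. Purely as a stand-in sketch of the same table structure (using Python's built-in sqlite3 instead of PostgreSQL so it runs anywhere; the rows are made-up):

```python
import sqlite3

# In-memory stand-in for the PostgreSQL database described above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A table of dates, cities, and average temperature in Celsius.
cur.execute("""
    CREATE TABLE weather (
        obs_date   TEXT,
        city       TEXT,
        avg_temp_c REAL
    )
""")
cur.executemany(
    "INSERT INTO weather VALUES (?, ?, ?)",
    [("2017-08-01", "Dublin", 17.5), ("2017-08-01", "Madrid", 31.0)],
)
conn.commit()

rows = cur.execute(
    "SELECT city, avg_temp_c FROM weather ORDER BY city"
).fetchall()
print(rows)
```

In PostgreSQL proper, the equivalent `CREATE TABLE` and `INSERT` statements would be issued via `psql` or, from R, through an RPostgreSQL connection.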

Added by Michael Grogan on August 7, 2017 at 7:30am — No Comments

One of the big issues when working with data in any context is **cleaning and merging datasets**. You will often find yourself collating data across multiple files and needing R to carry out operations you would normally perform with commands like **VLOOKUP** in Excel.

The tips I give below for data manipulation in R are not exhaustive - there are a myriad of ways in which…
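As an illustration of the VLOOKUP-style join referred to above (sketched with pandas rather than R; the frames are made-up for the example):

```python
import pandas as pd

# Two made-up tables sharing a key column, as when collating files.
sales = pd.DataFrame({"city": ["Dublin", "Madrid", "Paris"],
                      "units": [10, 7, 4]})
prices = pd.DataFrame({"city": ["Dublin", "Madrid"],
                       "price": [1.5, 2.0]})

# A left join keeps every sales row and matches prices where the key
# exists -- the same effect as a VLOOKUP in Excel. Unmatched keys
# (Paris here) get NaN.
merged = sales.merge(prices, on="city", how="left")
print(merged)
```

In R, `merge(sales, prices, by = "city", all.x = TRUE)` performs the same left join.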

Added by Michael Grogan on July 10, 2017 at 6:00pm — 1 Comment

Below is an example of how **sklearn** in Python can be used to develop a k-means clustering algorithm.

The purpose of k-means clustering is to partition the observations in a dataset into a specified number of clusters in order to aid analysis. It is particularly valuable for data visualisation.

This post explains how to:

- Import KMeans and PCA through the sklearn…
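A compact sketch of the PCA-then-k-means flow the excerpt describes; the data is synthetic, and two components and two clusters are illustrative choices:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Synthetic 5-dimensional data with two well-separated groups.
rng = np.random.default_rng(0)
a = rng.normal(0, 0.5, size=(50, 5))
b = rng.normal(5, 0.5, size=(50, 5))
X = np.vstack([a, b])

# Reduce to two principal components, mainly for visualisation.
X2 = PCA(n_components=2).fit_transform(X)

# Cluster the reduced data.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X2)
print(np.bincount(km.labels_))  # cluster sizes
```

Plotting `X2` coloured by `km.labels_` is then a natural way to visualise the partition.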

Added by Michael Grogan on June 17, 2017 at 8:00am — 9 Comments



© 2019 Data Science Central ®
