This post is the second part of our multi-part TensorFlow tutorial series, which aims to lay a solid foundation for this popular tool that everyone seems to be talking about. In this part, we cover getting started: installing TensorFlow and building a small use case.
If you already have TensorFlow installed, you can skip to the next section.
Different operating systems have different ways to install TensorFlow; the official documentation covers the details. Here I will discuss only what is essential to get started, following the basic setup and best practices already described in the documentation.
Installing TensorFlow with GPU support requires an NVIDIA GPU; AMD video cards are not supported. NVIDIA GPUs are programmed through CUDA, a low-level GPU computing platform that is proprietary NVIDIA software. One could go the OpenCL route with AMD cards, but as of now that does not work with TensorFlow.
Also, not all NVIDIA devices are supported. The NVIDIA documentation includes a list of the supported GPUs.
There are a few environments you can leverage to set up TensorFlow:
- pip – the preferred way to install TensorFlow directly.
- virtualenv – effectively a parallel Python installation alongside the one already on your system. Installing libraries in a virtual environment keeps them isolated, so you never get compatibility clashes with libraries installed directly; if anything goes wrong, you can simply fire up a new virtual environment and start afresh.

Python 2.7, and Python 3.3 or later, are supported on all operating systems. The exception is Windows, which as of now supports only Python 3.5; Python 2 on Windows is not a supported combination.
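As a quick sanity check before installing, you can confirm that your interpreter falls in a supported range. This is a minimal sketch (the helper name is my own); the version thresholds simply mirror the ones stated above:

```python
import sys

def tensorflow_python_supported(version_info=sys.version_info):
    """Supported per the text above: Python 2.7, or Python 3.3+
    (on Windows, only Python 3.5 at the time of writing)."""
    major, minor = version_info[0], version_info[1]
    if major == 2:
        return minor == 7
    if major == 3:
        return minor >= 3
    return False

print(tensorflow_python_supported())  # True on any supported interpreter
```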
Once TensorFlow is installed, irrespective of the operating system, environment, or Python version, you should run the following script to verify that TensorFlow is up and running.
# import TensorFlow
import tensorflow as tf
sess = tf.Session()
# Verify we can print a string
hello = tf.constant("Hello UNP from TensorFlow")
print(sess.run(hello).decode())  # sess.run returns bytes in Python 3, so decode to a string
# Perform some simple math
a = tf.constant(20)
b = tf.constant(22)
print('a + b = {0}'.format(sess.run(a + b)))
Once this code runs and prints the output successfully, congratulations! You have successfully installed TensorFlow. Let's move on to the next section and build our first application.
The following are the three types of tensors we need to learn before getting started.

| Type | Description |
| --- | --- |
| Constant | Fixed values that never change |
| Variable | Values the graph can adjust during training (e.g. model weights) |
| Placeholder | Slots used to feed data into the graph at run time |
Before diving into the hands-on part, I just want to introduce a few more terms from the TensorFlow terminology along with their meanings.
The above diagram should help in understanding. Below are the different data types supported by TensorFlow.
Note: quantized values [qint8, qint16 and quint8] are special TensorFlow types that help reduce the size of the data. In fact, Google has gone to the extent of introducing Tensor Processing Units (TPUs) to speed up computation, in part by leveraging such reduced-precision values.
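To see why quantized types shrink data, here is a small NumPy illustration of the idea (a sketch of linear quantization, not TensorFlow's actual quantization scheme): mapping float32 values onto 8-bit integers cuts storage to a quarter, at the cost of a small reconstruction error.

```python
import numpy as np

values = np.linspace(-1.0, 1.0, 1000).astype(np.float32)

# Linear quantization: map the value range onto the int8 range [-127, 127]
scale = 127.0 / np.abs(values).max()
quantized = np.round(values * scale).astype(np.int8)

# Recover approximate floats by dividing the scale back out
restored = quantized.astype(np.float32) / scale

print(values.nbytes)     # 4000 bytes as float32
print(quantized.nbytes)  # 1000 bytes as int8: 4x smaller
print(np.abs(values - restored).max())  # reconstruction error stays small
```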
We will quickly generate some data to get started: house sizes, from which we will predict house prices. The goal here isn't to build a sophisticated house price predictor, but to get things moving in TensorFlow in the simplest possible way.
We will generate the data with the Python code below –
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt

# generate some house sizes between 1000 and 3500 (typical sq ft of a house)
num_house = 160
np.random.seed(42)
house_size = np.random.randint(low=1000, high=3500, size=num_house)

# generate house prices from house size, with random noise added
house_price = house_size * 100.0 + np.random.randint(low=20000, high=70000, size=num_house)

# plot the generated house sizes and prices
plt.plot(house_size, house_price, "bx")  # bx = blue x
plt.ylabel("Price")
plt.xlabel("Size")
plt.show()
This generates the output below: a scatter plot of the generated house prices against house sizes.
Next, we are going to normalize the data. This brings the values onto the same scale, which in turn can lead to faster convergence.
We also split the data into train and test sets, as per data science best practice. We will train our model on the training data and evaluate it on the test data to see how accurate our predictions are.
# you need to normalize values to prevent under/overflows.
def normalize(array):
return (array - array.mean()) / array.std()
# define number of training samples, 0.7 = 70%.
# We can take the first 70% since the values are randomized
num_train_samples = math.floor(num_house * 0.7)
# define training data
train_house_size = np.asarray(house_size[:num_train_samples])
train_price = np.asarray(house_price[:num_train_samples])
train_house_size_norm = normalize(train_house_size)
train_price_norm = normalize(train_price)
# define test data
test_house_size = np.array(house_size[num_train_samples:])
test_house_price = np.array(house_price[num_train_samples:])
test_house_size_norm = normalize(test_house_size)
test_house_price_norm = normalize(test_house_price)
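As a quick sanity check, the normalized arrays should end up with mean ≈ 0 and standard deviation ≈ 1. Here is a small standalone sketch (the `normalize` helper and the size generation are repeated so the snippet runs on its own):

```python
import numpy as np

def normalize(array):
    return (array - array.mean()) / array.std()

np.random.seed(42)
sample = np.random.randint(low=1000, high=3500, size=160).astype(np.float64)
sample_norm = normalize(sample)

print(sample_norm.mean())  # should be ~0
print(sample_norm.std())   # should be ~1
```

If either check fails, the data was likely normalized with statistics from a different array; always compute the mean and standard deviation from the array being normalized.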
I hope this sets the expectations for what is to come. In the next part, we will finally be ready to train our first TensorFlow model on house prices, which will give us our first real hands-on experience with TensorFlow!
© 2021 TechTarget, Inc.