Dataproc is a low-cost, easy-to-use managed Spark and Hadoop service, integrated with Google Cloud Platform, that can be leveraged for batch processing, streaming, and machine learning use cases.
BigQuery is an enterprise-grade data warehouse that enables high-performance SQL queries using the processing power of Google's infrastructure.
In this blog, we will review an ETL data pipeline in StreamSets Transformer, a Spark-based ETL engine, that ingests real-world Fire Department of New York (FDNY) data stored in Google Cloud Storage (GCS), transforms it, and stores the curated data in Google BigQuery.
Once the transformed data is made available in Google BigQuery, it will be used in AutoML to train a machine learning model to predict the average incident response time for the FDNY.
Data Source And Dataset
The dataset is made available through the NYC Open Data website. The 2009-2018 historical dataset contains the FDNY's average response times, broken down by incident type (False Alarm, Medical Emergency, and so on) and borough, along with the number of incidents during each month.
Here's what the sample FDNY data looks like:
Before running the Spark ETL pipeline in StreamSets Transformer, you can preview the pipeline against the configured Dataproc cluster to examine the data structure and data types, and to verify the transformations at every stage. This is also a great way to debug data pipelines. For more information on pipeline preview, refer to the documentation.
Using a Filter processor, we will filter out incidents where INCIDENTCLASSIFICATION == "All Fire/Emergency Incidents" or INCIDENTBOROUGH == "Citywide".
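The Filter processor itself evaluates a Spark SQL condition inside the pipeline, so no code is required. As a rough sketch, its record-level logic is equivalent to the following plain-Python filter (the sample records are illustrative, not taken from the dataset):

```python
# Plain-Python sketch of the Filter processor's condition.
# Keep a record only if it is neither the citywide rollup nor the
# "All Fire/Emergency Incidents" aggregate row.

def keep_incident(record: dict) -> bool:
    """Return True for records the pipeline should keep."""
    return (
        record["INCIDENTCLASSIFICATION"] != "All Fire/Emergency Incidents"
        and record["INCIDENTBOROUGH"] != "Citywide"
    )

# Hypothetical sample rows for illustration only.
records = [
    {"INCIDENTCLASSIFICATION": "Medical Emergency", "INCIDENTBOROUGH": "Queens"},
    {"INCIDENTCLASSIFICATION": "All Fire/Emergency Incidents", "INCIDENTBOROUGH": "Queens"},
    {"INCIDENTCLASSIFICATION": "False Alarm", "INCIDENTBOROUGH": "Citywide"},
]

filtered = [r for r in records if keep_incident(r)]
```

Filtering out these aggregate rows matters because they duplicate information already present in the per-borough, per-classification records.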
Because this is a historical dataset and we’re using it to train a machine learning model, we need to remove information that would not be known at the beginning of the month. In this case, that is INCIDENTCOUNT. To remove this field from every record, we’ll use a Field Remover processor.
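Again, the Field Remover processor is configured in the UI rather than coded, but its effect on each record can be sketched as dropping a key from a dictionary (the sample row below is hypothetical):

```python
# Sketch of the Field Remover step: drop INCIDENTCOUNT from every record,
# since the count would not be known at the start of the month.

def remove_fields(record: dict, fields=("INCIDENTCOUNT",)) -> dict:
    """Return a copy of the record without the named fields."""
    return {k: v for k, v in record.items() if k not in fields}

# Hypothetical sample row for illustration only.
row = {"INCIDENTBOROUGH": "Bronx", "INCIDENTCOUNT": 462, "AVERAGERESPONSETIME": "04:51"}
cleaned = remove_fields(row)
```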
Labels, or target variables, for machine learning models must be of a numeric data type. In this case, the value of the AVERAGERESPONSETIME field is transformed in the following steps:
Running the StreamSets Transformer data pipeline displays various metrics in real time, such as the batch processing time taken by each stage, as shown below. This is a great way to start fine-tuning the processing and transformations.
Once the pipeline runs successfully, the Google BigQuery table is auto-created, if it doesn't already exist, and the transformed data is inserted into the table. This dataset is then readily available for querying as shown below.
The transformed data can then be imported directly from the BigQuery table to train a machine learning model in AutoML.
Using AutoML, you can build on Google's machine learning capabilities to create custom machine learning models.
That’s it! We went from loading raw, real-world data into Google BigQuery to creating a machine learning model in AutoML without any coding or scripting!
It goes without saying that training models, evaluating them, versioning them, and serving different versions of a model are non-trivial undertakings, and they are not the focus of this post. That said, StreamSets Transformer makes it really easy to load data into Google BigQuery and AutoML.
Check out these helpful resources to get started quickly with running your Spark ETL data pipelines.
Learn more about StreamSets For Google Cloud Platform.