Google Cloud for Data Engineers

The Google Cloud for Data Engineers training course teaches students the fundamentals of Google Cloud Platform (GCP) for building and running data pipelines that process batch or streaming data.

The course begins with an overview of the GCP services most frequently used by data engineers. Students then build on their existing SQL, Hadoop, and Python skills, learning how to reuse existing applications to take advantage of managed MySQL and Hadoop/Spark infrastructure on GCP.

Most of the course focuses on the differentiating capabilities of GCP for data engineering. Students will learn how to process, analyze, and store petabytes of batch and streaming data with serverless services like Pub/Sub, BigQuery, and Dataflow. For example, students will work with Apache Beam code that goes beyond the limitations of the original MapReduce framework. Students will also be introduced to the machine learning capabilities of GCP that data engineers can start using without prior data science experience.
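
As a preview, the pipeline stages the Beam labs exercise (FlatMap, Map, CombinePerKey) can be sketched in plain Python. This is an illustrative sketch of the concepts only, not actual Apache Beam API code:

```python
from collections import defaultdict

def flat_map(fn, items):
    # FlatMap: each input element may produce zero or more outputs.
    for item in items:
        yield from fn(item)

def map_one(fn, items):
    # Map: exactly one output per input element.
    for item in items:
        yield fn(item)

def combine_per_key(pairs):
    # CombinePerKey: aggregate all values sharing a key,
    # like MapReduce's shuffle-and-reduce phase.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

lines = ["to be or not to be"]
words = flat_map(str.split, lines)        # one line -> many words
pairs = map_one(lambda w: (w, 1), words)  # one word -> one (word, 1) pair
counts = combine_per_key(pairs)
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In real Beam code these stages become `beam.FlatMap`, `beam.Map`, and `beam.CombinePerKey` applied to PCollections, and the same pipeline can run on batch or streaming data.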

The course also provides architectural overviews of data processing pipelines on GCP and guidance on choosing the right GCP services for your project.

Course Summary

Purpose: 
Learn to build systems on Google Cloud to store and process batch or streaming data.
Audience: 
Developers and development teams looking to build batch and streaming data pipelines on GCP.
Skill Level: 
Learning Style: 

Hands-on training is customized, instructor-led training with an in-depth presentation of a technology and its concepts, featuring such topics as Java, OOAD, and Open Source.


Seminars are highly-focused, lecture-heavy, half-day to multi-day learning events. Seminars are a great way to create an awareness level of knowledge for a large number of concepts, in a short period of time. Think wide (breadth) and thin (depth).


Workshops are instructor-led lab-intensives focused on the practical application of technologies through the facilitation of a project-related lab. Workshops are the opposite of Seminars: they deliver the highest level of knowledge transfer of any format. Think narrow (breadth) and deep (depth).

Duration: 
3 Days
Productivity Objectives: 
  • Describe the capabilities of Google Cloud for data engineering
  • Build and run data processing pipelines on GCP to ingest, analyze, and store data
  • Identify how to use managed Google Cloud infrastructure for MySQL and Hadoop/Spark
  • Discuss when and how to use Pub/Sub, Dataflow, and BigQuery for serverless data pipelines
  • Integrate data pipelines with other GCP services
  • Identify what criteria to use for the design of data processing pipelines on GCP

What You'll Learn

In the Google Cloud for Data Engineers training course you’ll learn:

  • Google Cloud Basics
    • Why Google Cloud
    • Managed Virtual Infrastructure vs. Serverless
    • Google Cloud for Data Engineers
  • Compute Engine
    • Virtualized Infrastructure
    • Persistent vs. Transient Storage
    • Compute Engine User Interface
    • Preemptible Instances
  • Cloud Storage (GCS)
    • Object Storage and Buckets
    • Integration with GCP
    • Web-based and Command Line Interfaces
  • Cloud SQL
    • Provisioning Managed Database Infrastructure
    • Configuration of MySQL on GCP
    • Batch Data Import/Export with Cloud SQL
    • Web-based Interface
    • Integration of MySQL with GCP Services and Applications
  • Cloud Pub/Sub
    • Distributed Messaging Basics
    • Publish/Subscribe Messaging Model
    • Topics and Subscriptions for Messaging
    • Command Line and Python Interfaces
  • Datastore
    • Object-Relational Impedance Mismatch
    • Datastore for Transactional Data
    • Java APIs for Datastore
  • Machine Learning APIs
    • Colaboratory
    • Vision, Natural Language, Translate APIs
    • AutoML Vision
  • Dataproc
    • MapReduce Framework
    • Provisioning Managed Apache Hadoop/Spark/YARN Infrastructure
    • Customizing Apache Bigtop Distribution
    • Preemptible Instances for MapReduce
    • Dataproc User and Command Line Interfaces
    • Map vs. FlatMap for MapReduce
    • Running Apache Hive, Apache Pig, and PySpark
    • Running and Monitoring MapReduce Jobs
    • Storage Migration from HDFS to GCS
  • Dataflow
    • Apache Beam Framework
    • Batch and Streaming Data Processing Pipelines
    • Run Apache Beam in Cloud Shell
    • Apache Beam Combine vs. GroupBy
    • Submitting Apache Beam Pipelines
    • Running Batch and Streaming Dataflow Jobs
    • Apache Beam Pipelines with Side-Inputs
    • Autoscaling Streaming Apache Beam Jobs
    • Apache Beam Windows and Triggers
    • Web-based and Command Line Interface
    • Monitoring Dataflow Jobs
  • BigQuery
    • Serverless Data Warehousing
    • Columnar vs. Row-based Storage
    • Normalization vs. Denormalization with Columnar Storage
    • Projects, Datasets, Tables
    • Batch Data Import/Export
    • Partitions and Performance Optimizations
  • Data Engineering with GCP
    • Architectures for Sample Batch and Streaming Pipelines
    • GCP Storage Optimal Access Patterns
    • GCP Storage Service Selection Decision Model
    • Cost Estimation
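
To preview one of the streaming concepts above, the fixed-window grouping that Dataflow performs before firing per-window triggers can be sketched in plain Python. This is an illustrative sketch only; the function names are not Beam API:

```python
from collections import defaultdict

def fixed_windows(events, size_s):
    # Assign each (timestamp, value) event to a fixed-size window,
    # keyed by the window's start time. Streaming engines group
    # elements this way so aggregations can fire per window.
    windows = defaultdict(list)
    for ts, value in events:
        window_start = (ts // size_s) * size_s
        windows[window_start].append(value)
    return dict(windows)

events = [(1, "a"), (4, "b"), (61, "c"), (125, "d")]
print(fixed_windows(events, 60))  # {0: ['a', 'b'], 60: ['c'], 120: ['d']}
```

In Beam itself this corresponds to applying `beam.WindowInto` with a fixed windowing strategy; triggers then control when each window's results are emitted.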

Get Custom Training Quote

We'll work with you to design a custom Google Cloud for Data Engineers training program that meets your specific needs: a plan 100% guaranteed to work for you, your team, and your budget.

Learn More

Chat with one of our Program Managers from our Boulder, Colorado office to discuss various training options.

DevelopIntelligence has been in the technical/software development learning and training industry for nearly 20 years. We’ve provided learning solutions to more than 48,000 engineers across 220 organizations worldwide.
