Cloud SQL Demo

Contents

Introduction to Google Cloud SQL
  1. Introduction (1m 58s)
  2. Cloud SQL Overview (6m 2s)
  4. Summary (1m 24s)

Difficulty: Beginner
Duration: 31m
Students: 134
Ratings: 5/5

Description

In this course, you will be introduced to Google Cloud SQL.

Learning Objectives

  • Create new databases using Cloud SQL
  • Migrate external databases to Cloud SQL
  • Manage database users
  • Import and export data
  • Create backups and read replicas

Intended Audience

  • Database administrators
  • Cloud Architects
  • GCP Developers
  • Anyone preparing for a Google Cloud certification

Prerequisites

  • Some experience working with databases
  • Basic understanding of SQL syntax
  • Access to a GCP account
Transcript

So in this lesson, I am going to demonstrate how to use Google Cloud SQL.  I will cover how to create a server instance, as well as how to import an external database.  I will also show you how to create databases, how to import and export data, and how to manage users.  

Please note that this is *not* going to be a SQL tutorial. I am not going to be teaching you how to write queries or work with records.  The focus will be on setting up databases, not on using them. 

As you can see, I have already logged into my GCP account.  The fastest way to access the Cloud SQL page is to type “SQL” into the search bar.  Oddly, searching for the full name “Cloud SQL” returns no match, so stick with just “SQL”.

Now because I haven’t created anything yet, you will see a pretty empty screen.  If I had created a few instances, you would see those listed here.  There are two main options I want to demonstrate.  First, I will show you how to create and configure a new Cloud SQL instance.  And then afterwards, I will show you how to migrate an external database into Cloud SQL.

So to start creating a new instance, you can either click on this link or this link.  Please note that creating an instance is not the same thing as creating a database.  Records are stored in a table.  Tables are stored in a database.  And databases are stored in an instance.  So I need to create an instance first, before I can create a database.

On this screen you will be asked to choose a database engine.  This can be either MySQL, PostgreSQL or Microsoft SQL Server.  Each engine works slightly differently from the others.  So you might need to do a little research in order to pick the best one for your needs.  MySQL is the most popular choice.  It has lots of online documentation and examples.  SQL Server is often favored by Microsoft shops.  But you should be aware that due to licensing fees, SQL Server does cost the most.  And no, you cannot use an existing license to avoid paying.

After choosing an engine, you will then have to fill out this form.  It has four main sections.  Let's go through each, one at a time.

So first under Instance Info, you need to provide a name.  This is how you will differentiate this instance from any others.  I will call mine “mysql-demo-1”.  Next you need to choose a password for the default user account.  You automatically get a “root” user account, and this lets you set the associated password.  You have three main options here.  You can have Google generate a secure password for you by clicking on “Generate”.  You can have no password at all, by clicking on this option here.  Generally this is not recommended, especially for production databases that will contain “real” information.  Or your third choice is to simply make up your own password and type it in.

You can also create additional user accounts later, if you wish.  This “Password Policy” section allows you to enforce certain rules.  For example, you can disallow passwords that are too short.  Or you can prevent users from picking something easy to guess.

This next option allows you to pick a database engine version.  Each database type has several versions that Google will support.  Generally, you will want to pick the latest version available.  However, your code might depend on some functionality that is only present in an older version.  So pick the version that makes the most sense for you.

So that covers section 1.

Section 2 allows you to choose between two default configurations: Production and Developer.  The “production” config is best for “real” data.  It offers higher power and reliability, but costs more.  The “developer” config is best for “fake” data.  Mostly you will use these databases for testing or debugging.  If a developer database goes offline or loses some data, it’s not a big deal.  

You can also click on “Configuration Details” to see the exact differences.  Remember that these are just default values.  So don’t worry about picking “production” and being stuck with only 4 virtual CPUs.  You can override any of these values later. 

Ok, once you pick your configuration, then it’s time to select a region and zone.  Your region controls where your data will reside.  Usually you want your data to be as close to your users as possible.  And then you can choose between running in a single zone or running in multiple zones.  Choosing multiple zones will cost more, but it means your databases will still be reachable even when a zone goes down.  You can pick specific zones or let Google pick your zones for you.

This last section lets you make more detailed changes.  There are a lot of options so I am not going to go through them all.  Machine type lets you choose the type and number of CPUs to assign to your instance.  This will directly affect how many users and operations per second you can handle.

The storage section lets you pick your storage type and capacity.  SSD is faster but more expensive.  HDD is slower but cheaper.  You also get to pick a starting amount of disk space.  By default Google will automatically increase your storage space when you run out.  So you can start with a lower number, and increase it later.   Of course, you can disable the automatic increase and just stick to a certain amount.

The Connections section lets you control who can connect to your databases.  You can assign a Public IP so that anyone on the internet can connect.  And you can lock down certain IP ranges with authorized networks.  Or you can assign a private IP so that your instance is only accessible from inside your GCP account.
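
If you prefer the command line, the same connectivity settings can be applied with gcloud.  Here is a rough sketch, assuming my “mysql-demo-1” instance and a made-up office IP range:

    # Allow connections only from a specific IP range (placeholder CIDR)
    gcloud sql instances patch mysql-demo-1 \
        --authorized-networks=203.0.113.0/24

    # Or remove the public IP entirely (this assumes private connectivity
    # has already been configured for the instance)
    gcloud sql instances patch mysql-demo-1 --no-assign-ip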

Backups let you schedule automatic backups of your data.   

Maintenance lets you define a “maintenance window”.  Since Google is going to be responsible for maintaining your Cloud SQL instances, they need to know when they can apply patches and reboot your servers.
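
As a rough gcloud equivalent, again assuming the “mysql-demo-1” instance, a maintenance window can be set like this:

    # Ask Google to apply patches on Sundays starting at 03:00 UTC
    gcloud sql instances patch mysql-demo-1 \
        --maintenance-window-day=SUN \
        --maintenance-window-hour=3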

Flags let you customize parts of your instance.  And labels can be used to help tell your instances apart, if you have a lot of them.  Alright, that should cover all the main options.  Make sure to review the summary on the right, to ensure everything is correct.  Once you have done that, then click on “Create Instance”.
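
Everything we just filled out in the form can also be done from the command line.  The sketch below is one rough gcloud equivalent; the machine tier, storage settings, and region are illustrative values rather than recommendations:

    # The root password here is the demo value from the video; pick something stronger
    # Use --availability-type=REGIONAL instead of ZONAL to run in multiple zones
    gcloud sql instances create mysql-demo-1 \
        --database-version=MYSQL_8_0 \
        --root-password=password \
        --tier=db-n1-standard-1 \
        --region=us-central1 \
        --availability-type=ZONAL \
        --storage-type=SSD \
        --storage-size=10GB \
        --storage-auto-increase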

Instance creation can take a while.  I am going to fast forward a bit, until this operation is complete.

Ok, it appears that my new instance has been created.  You can see it takes you to an Overview screen with details about it.  Before taking you through this screen in detail, I want to show you how the main Cloud SQL page has changed.  If I return to the main Cloud SQL page, here you can now see my new instance “mysql-demo-1”.  And if I click on the name, I will return to the overview page.

At the top are a bunch of options for managing my instance.  I can edit my instance settings by clicking on “edit”.  I can change the region, machine type, storage, etc.

I can import and export records using these buttons.  I will provide a demo in just a minute, but let’s go over the rest of the buttons here first.  Restart and stop allow you to shut down your instance and start it up again.  You will typically use this for fixing problems.

Clone allows you to create a duplicate of your instance.  Maybe you have a production database that you want to copy to testing or staging.  Note that you can copy the current data in the database, or you can even copy a previous state from a specified time.

Failover will allow you to manually trigger a failover event.  Basically, this allows you to test that your applications will continue to function whenever a zone goes down. Ok, so those are the main options available on the Overview page.  Right now I have an empty instance with no databases and no tables.  Let me fix that by importing some records.
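
For reference, most of these buttons have gcloud counterparts.  A rough sketch, with a made-up clone name and timestamp:

    gcloud sql instances restart mysql-demo-1

    # Stopping and starting is done by changing the activation policy
    gcloud sql instances patch mysql-demo-1 --activation-policy=NEVER
    gcloud sql instances patch mysql-demo-1 --activation-policy=ALWAYS

    # Clone the instance, optionally from an earlier point in time
    # (the timestamp option requires point-in-time recovery to be enabled)
    gcloud sql instances clone mysql-demo-1 mysql-demo-1-staging \
        --point-in-time=2023-06-01T00:00:00Z

    # Manually trigger a failover (only works on multi-zone instances)
    gcloud sql instances failover mysql-demo-1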

So I just need to click on the Import button here at the top.  This will read a file in Cloud Storage that is in either SQL or CSV format.  SQL format is just a list of SQL commands, so SQL files are more powerful; they can perform all sorts of operations.  CSV stands for “comma separated values”; CSV files just contain data and are used for adding new records.  In this case I have uploaded a SQL file that will create a database called “disney_plus”.  And inside this database will be a table called “catalog” that holds all the TV shows and movies currently available on the Disney Plus streaming service.

Down here at destination, you can pick the database to store the imported data in.  In my case, my SQL file handles all that for me, so I can leave this blank.  If I were importing a CSV file, I would need to tell it where to save all the new records.

Once you have made all your selections, then just click on the “Import” button at the bottom of the page.  It can take a while for import to complete.  Let me skip ahead.
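
The same import can be started with gcloud.  The bucket name below is a placeholder; the “disney_plus” database and “catalog” table come from the demo file:

    # Import a SQL dump from Cloud Storage into the instance
    gcloud sql import sql mysql-demo-1 gs://my-demo-bucket/disney_plus.sql

    # For CSV you must also name the target database and table
    gcloud sql import csv mysql-demo-1 gs://my-demo-bucket/catalog.csv \
        --database=disney_plus --table=catalog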

Ok, now that it is complete, we need to verify that it worked.  I can connect to my new database by scrolling down and using the “Connect to this instance” section on the Overview page.  If I am going to connect I need the IP address which is listed here.  I also need the user account, which in this case is “root”.  And I need the password for the account, which is currently set to “password”.  Now there are many different ways to connect.  The easiest way is to use this link that Google has provided here.  When I click on “Open Cloud Shell” it is going to first open Cloud Shell.  I will have to wait a little bit to sign in.  Let me expand this to its own window.  And then it will automatically paste in a command to connect with a gcloud MySQL client.  So all I have to do is hit the return key.  And I need to authorize the call.  And then after a minute, I should be able to run any SQL commands I want in an interactive session.

Ok, I need to enter the password for “root”.  And there we go.  Now I can use this prompt to view my records.  So let me tell it to switch to the “disney_plus” database.  And I can just list out every single record in the table called “catalog”.  It looks like I imported 1450 records.  This format is sort of hard to read, so let me pull up information on one specific movie. Ok here I have pulled up the record for the movie “Inside Out”.  So it looks like the import worked as expected.
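
If you want to repeat this check yourself, the whole session boils down to a few commands.  This is only a sketch of what I ran; the “title” column is my assumption about how the “catalog” table is laid out:

    gcloud sql connect mysql-demo-1 --user=root

    -- Then, at the mysql> prompt:
    USE disney_plus;
    SELECT COUNT(*) FROM catalog;
    SELECT * FROM catalog WHERE title = 'Inside Out';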

Let me exit out of the mysql client.  And then I will exit out of Cloud Shell.  So now that I have a database with some records in it, I can also do an export.  Just click on the export button on the top here.  Again you have to choose a format.  Either SQL or CSV.  Choose which data to export.  You can pick all data or choose specific databases.  And finally, choose a bucket and path to save it to in Cloud Storage.  And there we go.  It is pretty simple.
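
The gcloud equivalent of the export looks roughly like this (the bucket and path are placeholders):

    # Export just the disney_plus database as a SQL dump in Cloud Storage
    gcloud sql export sql mysql-demo-1 gs://my-demo-bucket/exports/disney_plus.sql \
        --database=disney_plus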

So those are the main operations available for your Cloud SQL instances.  I’ll quickly take you through the rest now.  You have a graph for monitoring your instance here.  You can toggle between CPU utilization, memory usage, and a bunch of other things.  If you scroll down you can find the logs for your instance.  So here you can see when I created the instance.  Here is when I imported the SQL file.  And here is when I did the export.  You can get a list of more details by using these links down here.

This side menu has more info and options as well.  Connections lets you manage who can connect to your instance.  You saw these same settings when I created the instance.  There are additional security options for enforcing SSL and generating client certificates if needed.  Connectivity Tests can be used for debugging any connection issues.  You can use this to determine if your packets are being received.
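
The SSL options here also have command-line counterparts, along these lines (the certificate name and key file are made up):

    # Require SSL for all connections to the instance
    gcloud sql instances patch mysql-demo-1 --require-ssl

    # Generate a client certificate and write its private key to a local file
    gcloud sql ssl client-certs create demo-cert client-key.pem \
        --instance=mysql-demo-1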

The users tab allows you to manage your users.  You can create new database user accounts and delete them.  This is pretty straightforward.
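
User management can also be scripted.  A quick sketch, with a placeholder username and password:

    gcloud sql users create app_user --instance=mysql-demo-1 --password=a-strong-password
    gcloud sql users list --instance=mysql-demo-1
    gcloud sql users delete app_user --instance=mysql-demo-1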

Databases shows you the list of databases that exist.  You can see the “disney_plus” database I imported at the top.  The rest of these are default-created MySQL databases.  You also can use this page to create or delete databases.  But you cannot view or manage tables or records.  For that, you need to use a MySQL client and run the appropriate queries.
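
Creating and deleting databases from the command line is similar (the database name here is just an example):

    gcloud sql databases create another_db --instance=mysql-demo-1
    gcloud sql databases list --instance=mysql-demo-1
    gcloud sql databases delete another_db --instance=mysql-demo-1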

Backups lets you manage your backups.  You can manually initiate a new backup.  This would be useful to do right before you try something risky.  You can restore a backup by simply clicking on the “restore” link.  You also can enable or disable automatic backups here as well.
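
Backups can be taken and restored with gcloud as well.  The backup ID below is a placeholder you would copy from the list command:

    # Take an on-demand backup before trying something risky
    gcloud sql backups create --instance=mysql-demo-1
    gcloud sql backups list --instance=mysql-demo-1

    # Restore a specific backup into the instance
    gcloud sql backups restore 1234567890 --restore-instance=mysql-demo-1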

“Replicas” is used for creating “read replicas” of your instance.  These are read-only copies that can help offload work from your primary instance.  So if your main instance is in the US and you have users in Australia, you can create a read replica in Sydney.  Your users will have faster response times since their requests don’t need to travel all the way to the US.  And your main instance will be faster because it won’t get bogged down with a bunch of extra queries from Australia.  Of course, this only helps with reads; any edits or deletes still have to go to the primary instance.
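
Creating a read replica in another region is a single gcloud command; the replica name and region here are illustrative:

    gcloud sql instances create mysql-demo-1-replica-au \
        --master-instance-name=mysql-demo-1 \
        --region=australia-southeast1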

Operations shows you a list of events for your instance.  It’s basically another place you can access the instance logs.

Ok, I need to show you one last thing.  Here is how you delete an instance.
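
For reference, both the operations log and instance deletion are also available from gcloud:

    # Another way to view the event history for the instance
    gcloud sql operations list --instance=mysql-demo-1

    # Permanently delete the instance (this cannot be undone)
    gcloud sql instances delete mysql-demo-1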

So at this point you should understand how to create and manage a Cloud SQL instance.  Now I promised I would show you one more thing.  And that is how to migrate an external SQL database into Google Cloud.  I have MySQL running on a Virtual Machine inside Compute Engine.  And I am sick of maintaining it myself.  Instead, I want to move it over to Cloud SQL.

Well, luckily Google makes this fairly simple.  To start out, I need to click on the “Migrate data” button either here or here.  Now by default, it takes you to the “Migration jobs” screen.  But you actually first need to create a “Connection profile”.  So let me click on that.

Ok, so I need to create a profile.  You need to tell it what kind of database you will be connecting to.  You need to set a name for the profile to distinguish it from others.  You have to specify the IP address of the current database server.  Let me copy that from the Compute Engine screen.  By default, the port number is 3306, so unless you changed that you can leave this alone.

You also need to specify a valid username and password.  I am going to use the default root account and then paste in the password here.  Then you have to specify a region to save this profile to.  I think the only difference here is that this profile will be unavailable if the region it is saved in goes offline.  And finally you can specify if the server requires encryption, which mine does not.  Alright, so that should be it for the connection profile.
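
Database Migration Service has a gcloud surface too.  As a rough sketch of creating the connection profile (the host IP, password, and region are placeholders, so double-check the flags against the current gcloud reference):

    gcloud database-migration connection-profiles create mysql source-mysql-vm \
        --region=us-central1 \
        --host=203.0.113.10 \
        --port=3306 \
        --username=root \
        --password=password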

Now I can create a migration job.  So I have to give it a name.  You have to specify the database type that you are importing from again.  This region specifies where the new instance will live.  So make sure that you select the correct option.  And here you can set if this migration will be a continuous or one-time event.  Continuous would be useful if you wanted to keep the old instance and the new instance in sync.  One-time is best for when you want to replace the old instance with the new one.

Here they warn you about some prerequisites to make this work.  Basically, you need a way to connect to your existing instance.  You need network connectivity and a working user account.  This migration isn’t going to work if your server is behind a firewall that is blocking all connections.

So for this next part, I just need to pick the connection profile I just created.  Here I need to specify the configuration for my new Cloud SQL instance.  You can pick the name, zone, and all the same configuration options you saw before.  Once I click on this button, it is going to start creating the new database, even before the migration job has started.  Make sure you verify all your selections before proceeding.

Now I have to specify how GCP will be able to connect to the old database.  The assumption is that the database is secured in a private network.  However, in this case, my database is actually not secured at all.  It is completely open to the internet.  So I can simply choose “IP allowlist” and not have to do anything else.  Normally, accessing the database will require some additional configuration.  So you either need to configure an IP allowlist.  Or you can set up a reverse SSH tunnel, or VPC peering.  The exact steps will depend upon your existing environment.

Ok, I need to wait for my new Cloud SQL instance to come up.  This could take a while so let me fast forward a bit.  Ok, so let me save what I have done so far.  Now it would be a good idea to try a test connection and make sure I entered everything correctly.  So I will click on the “Test job” button to find out if I did everything correctly.

It works!  So now I can go ahead and create the job.  And then run it.  And there we go.  It appears that it successfully made a copy of the MySQL instance.  So now you know how to create a Cloud SQL instance, as well as how to migrate an existing database into it.

About the Author
Students: 31948
Courses: 36
Learning Paths: 14

Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.

Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.

When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.
