Creating a Cloud SQL Instance Using Config Connector


This guide covers Config Connector for Google Cloud Platform, a tool that exposes GCP services as Kubernetes objects. Applications running on Google Kubernetes Engine can reach a Cloud SQL instance either through the Cloud SQL Auth Proxy or directly over a private IP address. Even when a private IP address is available, connecting through the Cloud SQL Auth Proxy is the recommended option.

What's the Point of Config Connector?

Config Connector is an open-source tool that lets you manage your Google Cloud resources from within Kubernetes. Many cloud-native development teams juggle a mix of configuration systems, application programming interfaces (APIs), and other tools to manage their infrastructure. This mixture is hard to reason about and is a common cause of slowed delivery and costly errors. Config Connector lets you configure a wide variety of Google Cloud resources and services using the same tools and APIs that Kubernetes already provides.

This is especially helpful where multiple application teams coexist and each wants full control over what is deployed to its own GCP project. A self-service GitOps workflow can be built for such teams using Config Connector together with ArgoCD or Helm charts. This lessens the burden on SRE/DevOps teams so that they can concentrate on the platform's underlying infrastructure (Identity and Access Management, networking, security, monitoring, Continuous Integration/Continuous Delivery pipelines, etc.).

The advantage of using Config Connector is that we only have to configure it once; after that we can deploy GCP resources with kubectl. Please note that this guide uses GKE 1.24.8-gke.2000, which ships with Config Connector version 1.89.0.
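As a preview of what this looks like in practice, a GCP resource is declared as an ordinary Kubernetes manifest once Config Connector is installed. A minimal sketch (the topic name here is illustrative, not from the repository):

```yaml
# A GCP Pub/Sub topic declared as a Kubernetes object;
# applying this manifest with kubectl creates the topic in GCP.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic   # illustrative name
```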

Make a Copy of the Repository

cd $HOME
git clone git@github.com:harinderjits-git/samplekcc.git

The config of the infrastructure can be found in the YAML file below.

~/samplekcc/terraform/gcp/terragrunt/orchestration/config_env_sampleapp.yaml

In this YAML file, substitute your own values for the GCP billing account and project folder, as well as the project name and ID, and also the DB password.

Billing_Account: XXXXXXXX #change this value in ../../rundir_init/init.tf

For the project's parent folder, modify this value −

Parent: folders/445XXXXXX #change this value in ../../rundir_init/init.tf

Initialize the Terraform remote state. This creates a storage bucket for the remote state in the specified location.

gcloud auth login #follow the prompts
cd ~/samplekcc/terraform/gcp/rundir_init
terraform init
terraform apply -auto-approve

Build all of the GCP resources using Terraform and Terragrunt.

cd ../terragrunt/
source set-env.sh
cd orchestration/basic_infrastructure/vpc
terragrunt apply -auto-approve
cd ../../prd/gke
terragrunt apply -auto-approve

Take note of the "connect command" displayed in the output; we will need this to establish a connection to the GKE cluster.

connect_command = "gcloud container clusters get-credentials sampleappprodprimarygkeuc1 --region us-central1 --project samplemadebyme32145" #sample output from Terragrunt

To use kubectl, you must first connect to the GKE cluster −

gcloud container clusters get-credentials sampleappprodprimarygkeuc1 --region us-central1 --project samplemadebyme32145

This will make the GKE cluster the current context and add it to kubeconfig.

Setting Up Config Connector

When the Config Connector add-on is enabled on a GKE cluster, you still need to create and configure a ConfigConnector CustomResource before it can manage resources. Config Connector can run in either namespaced mode (a controller per namespace) or cluster mode (one controller for the whole cluster). Here we set up Config Connector in cluster mode.

cd ~/samplekcc/configconnector 
kubectl apply -f configconnector.yaml
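The configconnector.yaml applied above is expected to look roughly like the following sketch for cluster mode (the service account e-mail is an assumption; substitute the IAM service account created for your project):

```yaml
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  # Config Connector requires this exact name for the resource
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  # assumed value; replace with the IAM service account bound to
  # cnrm-controller-manager via Workload Identity
  googleServiceAccount: "cnrm-system@samplemadebyme32145.iam.gserviceaccount.com"
```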

Use kubectl to Create a New Namespace

Before you can create resources with Config Connector, you must tell it where to create them. Config Connector determines the target from an annotation, either on the resource itself or on an existing Namespace. When a Namespace is annotated, Config Connector creates resources in the specified project, folder, or organization. In this project we use the annotation on the Namespace.

kubectl apply -f namespace.yaml
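The namespace.yaml manifest is expected to carry the project annotation described above; a sketch (the namespace name is illustrative, and the project ID is taken from the sample output earlier):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sampleapp   # illustrative name
  annotations:
    # Config Connector creates resources from this namespace
    # in the annotated GCP project
    cnrm.cloud.google.com/project-id: samplemadebyme32145
```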

Checking Your Setup

All of Config Connector's processes run in the cnrm-system namespace. You can use the following command to check whether the Pods are ready −

kubectl wait -n cnrm-system \
--for=condition=Ready pod --all

If Config Connector was deployed properly, the result would look somewhat like this −

pod/cnrm-controller-manager-0 condition met

Create a Cloud SQL Instance Using Config Connector

We can now use Config Connector to launch a Cloud SQL instance. In this case, we are reusing a manifest from the repository; see below −

kubectl apply -f dbinstance.yaml
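The dbinstance.yaml manifest is expected to follow the shape of Config Connector's SQLInstance resource; a minimal sketch (the names, region, database version, and tier here are illustrative, not taken from the repository):

```yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: sampleapp-db       # illustrative name
  namespace: sampleapp     # the annotated namespace from the previous step
spec:
  region: us-central1
  databaseVersion: POSTGRES_13
  settings:
    tier: db-custom-1-3840 # machine tier; adjust to your needs
```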

This can take a while. You can monitor the progress of the reconciliation with the following command, or by watching the database instance's status in the Cloud Console.

kubectl logs -f -n cnrm-system cnrm-controller-manager-0

Create a database and a login (user) that can access it.

kubectl apply -f database.yaml
kubectl apply -f dblogin.yaml
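The database.yaml and dblogin.yaml manifests are expected to use Config Connector's SQLDatabase and SQLUser resources; a sketch (all names are illustrative, and the password Secret is an assumption):

```yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: sampleapp-database    # illustrative name
spec:
  instanceRef:
    name: sampleapp-db        # must match the SQLInstance name
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: sampleapp-user        # illustrative name
spec:
  instanceRef:
    name: sampleapp-db
  password:
    valueFrom:
      secretKeyRef:
        name: db-password     # assumed Kubernetes Secret holding the DB password
        key: password
```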

To verify that the intended resources have been created, you can either look at the Config Connector logs or use the Cloud Console.

Before testing connections to the Cloud SQL instance with SSMS, you must first add your desktop's IP address to the authorized networks under the Networking section of the Cloud SQL instance's Connections page.
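The same authorized-network entry can also be declared in the SQLInstance manifest itself rather than through the console; a fragment sketch (the label and CIDR are placeholders):

```yaml
# Fragment of the SQLInstance spec; merges into the settings block
spec:
  settings:
    ipConfiguration:
      authorizedNetworks:
        - name: workstation          # illustrative label
          value: "203.0.113.10/32"   # placeholder; use your desktop's IP
```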

Finally, a word of caution − you may encounter errors that aren't documented in the Config Connector release notes. If you're doing a manual install, upgrade Config Connector first and then test.

Conclusion

This demonstration was a humble attempt to show how your GKE cluster can be used to deploy GCP resources. You could replace plain kubectl with Helm charts and then use ArgoCD to deploy those charts, and there are likely many additional routes to the same end.

We hope this information is useful when you begin using Config Connector in your GitOps workflow.

Updated on: 27-Apr-2023
