You created a GCP account. Great! Now what? How do you control who can delete your production database? How do you make sure your intern can't accidentally spin up 100 VMs and hand you a $50,000 bill?
Welcome to the practical side of GCP: Identity and Access Management (IAM), billing controls, and the six different ways to create resources. This is where theory meets reality. Get this right, and your cloud infrastructure is secure and cost-effective. Get it wrong, and you're one misconfigured permission away from a security breach or budget explosion.
In this guide, you'll learn how IAM actually works (no fluff, just the equation you need), how to track and control costs without paranoia, and which deployment method to use for each situation. Whether you're a developer setting up your first project or a DevOps engineer managing team access, you'll walk away knowing exactly what to do.
Let's start with the foundation: controlling access.
IAM Fundamentals: Who Can Do What
Identity and Access Management (IAM) in GCP is surprisingly simple once you understand the equation. Forget the jargon for a second. Here's what IAM does:
The IAM Equation:
Principal (Who) + Role (What Access) + Resource (Which) = Permission
That's it. Someone (principal) gets certain permissions (role) on a specific thing (resource). Let's break down each piece.
Component 1: Principals (Who Gets Access)
Principals are identities that can access your GCP resources. Think of them as the "who" in your access control. GCP has four types:
1. Google Account
Your standard individual user account. This is what you log in with.
- Example: alice@company.com or bob@gmail.com
- Use case: Developers, admins, anyone who needs to access the GCP Console
- When to use: Individual human access
Real scenario: Your teammate Alice needs to deploy code to production. You grant her Google Account specific permissions on the production project.
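If you manage that grant from the CLI, it's a single command. A minimal sketch, assuming a hypothetical project ID web-app-prod and a role covered later in this post:

# Grant Alice VM admin rights on the production project (hypothetical IDs)
gcloud projects add-iam-policy-binding web-app-prod \
  --member="user:alice@company.com" \
  --role="roles/compute.instanceAdmin.v1"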
2. Service Account
A non-human identity for applications and services. This is how your code authenticates to GCP APIs without storing your personal credentials.
- Example: invoice-app@my-project.iam.gserviceaccount.com
- Use case: Cloud Functions calling BigQuery, VMs accessing Cloud Storage, automated deployments
- Best practice: One service account per application
Real scenario: Your invoice processing app needs to query the Cloud SQL database. You create a service account for the app with only database read permissions. If someone steals the service account key, they can't delete your buckets or spin up expensive VMs.
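A rough sketch of that setup with gcloud, assuming a hypothetical project ID my-project; the service account's email address is derived from the name you choose:

# Create a dedicated service account for the invoice app
gcloud iam service-accounts create invoice-app \
  --project=my-project \
  --display-name="Invoice processing app"

# Grant it only Cloud SQL client access on the project
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:invoice-app@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"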
3. Google Group
A collection of users and service accounts managed in Google Workspace. This is how you manage access at scale.
- Example: backend-engineers@company.com
- Use case: Assign permissions to entire teams at once
- Why it's awesome: New engineer joins? Add them to the group. They leave? Remove from group. Permissions update automatically everywhere.
Real scenario: Your backend team needs access to five different GCP projects. Instead of adding each person to each project individually (nightmare), you create a backend-engineers group, grant it the necessary roles, and manage membership in one place.
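On the CLI, that's one binding per project and then you never touch project IAM again when people join or leave. A sketch with hypothetical project IDs:

# Grant the backend-engineers group VM admin access on several projects
for project in web-app-prod web-app-dev analytics-pipeline; do
  gcloud projects add-iam-policy-binding "$project" \
    --member="group:backend-engineers@company.com" \
    --role="roles/compute.instanceAdmin.v1"
done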
4. Google Workspace Domain
Every user in your organization's domain gets the specified access. This is very broad.
- Example: All @101monkey.com email addresses
- Use case: Company-wide policies like "everyone can view (but not edit) all resources"
- Warning: Really broad. Use sparingly.
Real scenario: You want every employee at 101monkey to see what GCP resources exist (for transparency and learning), but not modify anything. You grant domain-level viewer access to 101monkey.com.
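That grant happens at the organization level. A sketch, assuming a hypothetical numeric organization ID:

# Every @101monkey.com account gets read-only visibility across the org
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="domain:101monkey.com" \
  --role="roles/viewer"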
Component 2: Roles (What They Can Do)
Roles are collections of permissions. Instead of granting individual permissions like "compute.instances.delete" or "storage.buckets.create", you grant roles that bundle related permissions together.
GCP has three types of roles, but you'll mostly use one of them.
1. Predefined Roles (Use These)
Google-curated roles with balanced permissions. These follow the principle of least privilege, meaning they give just enough access to do a job without going overboard.
Common predefined roles:
- roles/viewer: Read-only access to everything. Can see resources but not modify.
- roles/editor: Can modify resources but not change IAM policies or delete projects.
- roles/owner: Full control including IAM management and billing.
- roles/storage.objectViewer: Read objects from Cloud Storage buckets.
- roles/storage.objectAdmin: Full control over Cloud Storage objects (read, write, delete).
- roles/bigquery.dataViewer: Read table data and dataset metadata (running query jobs also needs a role like roles/bigquery.jobUser).
- roles/bigquery.dataEditor: Read and modify table data, create and delete tables.
- roles/compute.instanceAdmin.v1: Create, modify, and delete VMs.
- roles/cloudsql.client: Connect to Cloud SQL databases.
Why predefined roles rock: Google's security team maintains them. When new permissions are added to services, the relevant roles get updated automatically. You don't have to think about the 200+ individual permissions.
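If you ever wonder exactly which permissions a predefined role bundles, you can ask gcloud to list them:

# Show the permissions packed into a predefined role
gcloud iam roles describe roles/storage.objectViewer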
2. Custom Roles (For Special Cases)
Custom roles let you create your own permission bundles. You pick and choose from existing permissions.
- When to use: Unique compliance requirements that predefined roles don't cover
- Example: A role that lets someone start and stop VMs but not create or delete them
- Limitation: Custom roles are project or organization-specific, not global
Real scenario: Your compliance team needs to audit VM configurations but shouldn't be able to change anything. None of the predefined roles fit perfectly, so you create a custom role with only the read permissions for Compute Engine.
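A sketch of that audit role with gcloud; the role ID, title, and exact permission list are illustrative, so tailor them to what your auditors actually need:

# Custom read-only role for auditing Compute Engine configuration
gcloud iam roles create computeAuditor \
  --project=my-project-id \
  --title="Compute Auditor" \
  --description="Read-only view of Compute Engine configuration" \
  --permissions=compute.instances.get,compute.instances.list,compute.disks.get,compute.disks.list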
3. Primitive Roles (Avoid in Production)
The original three roles: Owner, Editor, Viewer (Google now calls them "basic roles"). They're called "primitive" for a reason: they're too broad for modern security practices.
- Why to avoid: Editor role gives access to almost everything except IAM. That's way too much power for most people.
- When to use: Quick prototyping, personal learning projects, testing
In production, always use predefined roles over primitive roles. Your future self will thank you when you're not debugging why an intern accidentally deleted the production database.
Component 3: Resources (What They're Accessing)
Resources are the actual GCP services you create: VMs, buckets, databases, BigQuery datasets, etc.
IAM policies can be set at different levels:
- Organization level: Applies to everything
- Folder level: Applies to all projects in that folder
- Project level: Applies to all resources in that project
- Resource level: Applies only to that specific resource (not all services support this)
Example: You can grant someone roles/storage.objectViewer on a specific bucket, letting them read objects from that one bucket but no others.
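Resource-level grants look like this with the gcloud storage commands (bucket name and user are placeholders):

# Read-only access to a single bucket, nothing else
gcloud storage buckets add-iam-policy-binding gs://my-reports-bucket \
  --member="user:alice@company.com" \
  --role="roles/storage.objectViewer"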
Putting It All Together: Real IAM Examples
Let's see IAM in action with practical scenarios:
Scenario 1: E-commerce Checkout App
Principal: checkout-app@my-project.iam.gserviceaccount.com (Service Account)
Role: roles/cloudsql.client
Resource: Cloud SQL instance "orders-db"
Result: The checkout app can connect to the orders database, nothing else
Why this works: The app only needs database access. It doesn't need to create buckets, spin up VMs, or modify BigQuery datasets. If the service account credentials leak somehow, the damage is limited to database access.
Scenario 2: Developer Alice Needs Dev Access
Principal: alice@company.com (Google Account)
Role: roles/compute.instanceAdmin.v1
Resource: Project "web-app-dev"
Result: Alice can create, modify, and delete VMs in the dev project, but can't touch production
Why this works: Alice needs full VM control for development work, but keeping her access limited to the dev project means she can't accidentally break production.
Scenario 3: Engineering Team Visibility
Principal: engineering@101monkey.com (Google Group)
Role: roles/viewer
Resource: Organization
Result: All engineers can view all resources across all projects, but can't modify anything
Why this works: Transparency is good. Engineers can see what infrastructure exists, learn from it, and debug issues. But they can't accidentally delete things while browsing around.
Policy Inheritance: How Permissions Flow Down
Here's where GCP gets really powerful. Permissions don't just apply where you set them; they cascade downward through your resource hierarchy.
How Inheritance Works
Organization: 101monkey.com (Policy: All employees = Viewer)
└── Folder: Engineering (Policy: Engineering group = Editor on engineering projects)
    └── Project: web-app-prod (Policy: Alice = Owner)
        └── Resource: VM instance (Policy: Service account = Admin on this VM)
What happens:
- All 101monkey employees can view everything (organization policy)
- Engineering group can edit resources in engineering projects (folder policy)
- Alice has full control over the web-app-prod project (project policy)
- The service account has admin access to that specific VM (resource policy)
Key rules:
- Child inherits parent permissions: Alice gets Viewer from the organization, Editor from the Engineering folder, and Owner from the project; her effective access is the union of all three.
- Children can ADD, not REMOVE: you can grant more permissions at lower levels, but a lower-level policy can't take away a permission granted higher up.
- Parent restrictions stick: if something is blocked higher up (for example, an organization policy that disables Compute Engine), no lower-level grant can override it.
Allow Policies: The JSON Behind IAM
When you grant permissions, GCP stores them as IAM policies in JSON format:
{
"bindings": [
{
"role": "roles/storage.admin",
"members": [
"user:admin@101monkey.com",
"serviceAccount:backup-job@project.iam.gserviceaccount.com"
]
},
{
"role": "roles/storage.objectViewer",
"members": [
"group:developers@101monkey.com"
]
}
]
}
This policy says: admin@101monkey.com and the backup-job service account have full storage admin access. Everyone in the developers group has read-only access to storage objects.
You rarely write these by hand; the Console and gcloud do it for you. But understanding the structure helps when debugging permission issues.
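When you do need to see the raw policy (for example, while debugging a 403), you can dump it straight from gcloud (project ID is a placeholder):

# Print the project's allow policy as JSON
gcloud projects get-iam-policy my-project-id --format=json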
IAM Best Practices (The Rules That Actually Matter)
1. Use Predefined Roles, Not Primitive Roles
Don't grant roles/editor unless you really mean "access to almost everything". Use specific roles like roles/compute.instanceAdmin.v1 or roles/bigquery.dataEditor.
2. Service Accounts for All Applications
Never put your personal credentials in code. Never check service account keys into Git. Create service accounts for apps, rotate keys regularly, and revoke old keys.
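If you must use downloadable keys at all (prefer attached service accounts or workload identity where you can), rotation is a create-then-delete cycle. A sketch with placeholder names:

# Create a new key for the app's service account
gcloud iam service-accounts keys create new-key.json \
  --iam-account=invoice-app@my-project.iam.gserviceaccount.com

# List existing keys to find the old key's ID
gcloud iam service-accounts keys list \
  --iam-account=invoice-app@my-project.iam.gserviceaccount.com

# Delete the old key once the app has switched to the new one
gcloud iam service-accounts keys delete OLD_KEY_ID \
  --iam-account=invoice-app@my-project.iam.gserviceaccount.com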
3. Assign Roles to Groups, Not Individual Users
Managing individual user permissions doesn't scale. Create groups (like backend-devs, frontend-team, data-analysts), assign roles to groups, and manage membership in Google Workspace.
4. Principle of Least Privilege
Grant the minimum permissions needed. Start restrictive and add permissions when someone actually needs them. It's easier to grant more access than to revoke it after something breaks.
5. Separate Environments
Use different projects for dev, staging, and production. Developers get full access to dev, read-only to staging, and emergency-only access to production. This prevents "I was just testing and accidentally deleted prod" disasters.
6. Enable Audit Logging
Turn on Cloud Audit Logs so you can see who granted what permissions, when, and to whom. When something breaks or a security incident happens, you'll need this trail.
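Admin Activity audit logs are on by default; Data Access logs need to be enabled per service. Once they're flowing, you can pull entries from the CLI. A rough sketch (project ID is a placeholder):

# Read recent Admin Activity audit log entries (who changed what, and when)
gcloud logging read \
  'logName="projects/my-project-id/logs/cloudaudit.googleapis.com%2Factivity"' \
  --limit=10 --format=json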
Billing & Budget Management: Avoiding Surprise Bills
Cloud bills can spiral fast if you're not paying attention. Let's talk about how GCP billing works and how to control costs without constantly worrying.
How GCP Billing Works
Pay-as-you-go model: No upfront costs. You pay for what you use, when you use it.
Per-second billing: Most services bill per second (Compute Engine has a one-minute minimum), not per hour. Run a VM for 8 minutes? You pay for 8 minutes, not a full hour.
Stopped resources = no compute costs: Stop a VM and you stop paying for compute (but you still pay for attached disks). Delete resources you're not using.
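For example, stopping or deleting a VM from the CLI looks like this (instance name and zone are placeholders):

# Stop the VM: compute charges stop, but attached disks still bill for storage
gcloud compute instances stop my-test-vm --zone=us-central1-a

# Delete it outright when you're done (the boot disk goes with it if auto-delete is set, the default)
gcloud compute instances delete my-test-vm --zone=us-central1-a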
Billing Hierarchy
Billing Account (Credit card or invoice)
│
├── Project: web-app-prod
├── Project: analytics-pipeline
└── Project: ml-experiments
Billing account: Links to your payment method. Can be linked to multiple projects.
Project-billing link: Each project needs a billing account. Unlinked projects have their resources disabled after a grace period.
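If you prefer the CLI, linking a project to a billing account is one command. A sketch with placeholder IDs; the billing account ID comes from gcloud billing accounts list:

# Find your billing account ID, then attach a project to it
gcloud billing accounts list
gcloud billing projects link my-project-id \
  --billing-account=0X0X0X-0X0X0X-0X0X0X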
Two types of billing accounts:
- Self-serve: Credit card, immediate setup, good for startups
- Invoiced: Monthly billing, requires Google approval, good for enterprises
GCP Free Tier
Google gives you free resources to learn and experiment:
Always-free tier (every month):
- 1 e2-micro VM instance (in us-west1, us-central1, or us-east1)
- 5 GB Cloud Storage (Standard class, in select US regions)
- 1 TB of BigQuery query processing per month (plus 10 GB of storage)
- 2 million Cloud Functions invocations
New account credit: $300 credit valid for 90 days. This is plenty for learning GCP without worrying about costs.
Tracking and Controlling Costs
GCP gives you several tools to monitor spending and set limits.
1. Resource Manager: Organize for Cost Visibility
Use folders to group projects by department, environment, or cost center. View costs per folder in billing reports.
Example structure:
Organization: 101monkey.com
├── Folder: Engineering
│   ├── Project: web-app-prod (main costs here)
│   └── Project: web-app-dev (minimal costs)
└── Folder: Marketing
    └── Project: analytics (moderate costs)
Now you can see: "Engineering costs $2,000/month, Marketing costs $500/month". This makes budget allocation and accountability clear.
2. Labels: Tag Resources for Cost Allocation
Labels are key-value pairs you attach to resources. They show up in billing reports, letting you slice costs however you want.
Common label patterns:
- Environment: env:prod, env:dev, env:staging
- Team: team:backend, team:frontend, team:data
- Cost center: cost-center:engineering, cost-center:marketing
- Project: project:user-auth, project:payment-processing
How to add labels:
- GCP Console: Edit resource → Add labels
- gcloud: gcloud compute instances add-labels my-vm --labels=env=prod,team=backend
- Terraform: Every resource has a labels parameter
Why labels matter: Your CFO asks "How much do we spend on dev vs production?" With labels, you filter the billing report by env:prod and get an instant answer.
3. Quotas & Limits: Prevent Runaway Costs
Quotas are service-specific caps that prevent accidentally using too much.
Examples:
- Max 24 CPUs per region (can be increased)
- Max 100 requests/second to an API
- Max 10 TB BigQuery query data processed per day
Why they're useful: Someone misconfigures auto-scaling and tries to create 1,000 VMs. Quotas stop it at 24 CPUs. Crisis averted.
How to check quotas: GCP Console → IAM & Admin → Quotas
How to increase: Click "Request quota increase" and explain why you need more; simple requests are often approved quickly, larger ones can take a couple of business days
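You can also inspect quotas from the CLI. A sketch for Compute Engine quotas (the region and the format projection are my assumptions; adjust to taste):

# Region-scoped Compute Engine quotas (CPUs, disks, addresses, ...)
gcloud compute regions describe us-central1 --format="yaml(quotas)"

# Project-wide Compute Engine quotas
gcloud compute project-info describe --format="yaml(quotas)"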
4. Budgets & Alerts: Get Warned Before Overspending
Set monthly budgets and get alerts when you approach them.
How to set up:
- Go to Billing → Budgets & alerts
- Set budget amount (e.g., $1,000/month)
- Set alert thresholds: 50%, 90%, 100%
- Choose notification method: Email or Pub/Sub (for automation)
Example alert setup:
Budget: $1,000/month
Alerts: 50% ($500), 90% ($900), 100% ($1,000)
Action: Email to engineering@101monkey.com
When you hit $500, you get an email. At $900, another email with urgency. At $1,000, you know you're at budget and can decide whether to increase it or optimize costs.
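The same budget can be created from the CLI. A hedged sketch using the gcloud billing budgets group (the billing account ID is a placeholder; check gcloud billing budgets create --help for the exact flags in your SDK version):

# $1,000/month budget with alerts at 50%, 90%, and 100%
gcloud billing budgets create \
  --billing-account=0X0X0X-0X0X0X-0X0X0X \
  --display-name="Monthly engineering budget" \
  --budget-amount=1000USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0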
Advanced: Auto-disable billing: You can trigger a Cloud Function to shut down non-critical resources when budget is exceeded. Risky for production (you don't want your site going down because you hit budget), but great for dev/test environments.
5. Cost Optimization Tips That Actually Work
- Committed use discounts: Commit to 1 or 3 years of usage and save up to 57% (up to 70% for memory-optimized machine types). Similar to reserved instances on AWS but more flexible.
- Right-size VMs: Don't use n1-standard-8 (8 CPUs, 30 GB RAM) when e2-medium (2 CPUs, 4 GB RAM) works fine. GCP's Recommender suggests right-sizing.
- Preemptible VMs (now Spot VMs): up to 80% cheaper for batch jobs and fault-tolerant workloads. They can be terminated with 30 seconds notice, but if your job can handle restarts, you save big (see the example after this list).
- Delete unused resources: Old snapshots, detached disks, and static IPs not attached to VMs all cost money. Clean up regularly.
- Cloud Storage lifecycle policies: Move old data to Nearline or Coldline storage (cheaper for infrequently accessed data).
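As referenced in the preemptible VM tip above, a batch worker built to tolerate restarts can be created like this (instance name and zone are placeholders; newer SDKs also accept --provisioning-model=SPOT):

# Preemptible VM for fault-tolerant batch work, at a steep discount
gcloud compute instances create batch-worker-1 \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --preemptible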
6. Billing Reports & BigQuery Export
GCP Console → Billing → Reports shows costs by:
- Service (Compute Engine vs Cloud Storage vs BigQuery)
- Project
- SKU (specific resource type like "n1-standard-4 VM in us-central1")
- Location (which region)
Pro move: Export billing data to BigQuery (it's free). Then you can run SQL queries like:
SELECT
project.name,
service.description,
SUM(cost) as total_cost
FROM `billing_export.gcp_billing_export_v1_XXXXX`
WHERE DATE(usage_start_time) >= '2026-01-01'
GROUP BY project.name, service.description
ORDER BY total_cost DESC
LIMIT 10;
Now you know exactly which projects and services cost the most. Build dashboards in Looker or Looker Studio (formerly Data Studio) for ongoing visibility.
Real-World Cost Management Example
Scenario: Startup with $5,000/month budget
Actions:
- Create billing account with $5,000 budget
- Set alerts at $2,500 (50%), $4,500 (90%), $5,000 (100%)
- Label all resources: env:prod or env:dev
- Check billing report weekly: 60% of costs are prod, 40% are dev
- Optimization: Move dev VMs to preemptible instances (save 40% on dev costs)
- Result: Dropped from $5,200/month to $4,400/month. Under budget with clear cost visibility.
Six Ways to Create GCP Resources
There's no single "right" way to create resources in GCP. Different methods for different situations. Let's walk through all six so you know which to use when.
Method 1: Google Cloud Console (UI)
What it is: The web interface at console.cloud.google.com
When to use: Learning GCP, quick testing, one-off resource creation
Pros:
- Visual and intuitive
- No code required
- Great for beginners
- See all options clearly
Cons:
- Not reproducible (hard to recreate exactly)
- Manual (doesn't scale to 100 VMs)
- Time-consuming for repetitive tasks
Example: Creating a VM
- Go to console.cloud.google.com
- Navigate: Compute Engine → VM instances
- Click "Create Instance"
- Fill the form:
  - Name: my-test-vm
  - Region: us-central1
  - Zone: us-central1-a
  - Machine type: e2-medium (2 vCPU, 4 GB memory)
  - Boot disk: Debian 11
- Click "Create"
- Wait 30 seconds → the VM is running
Best for: Exploring GCP features, creating resources while learning, quick experiments
Method 2: gcloud CLI (Command Line)
What it is: Command-line tool for managing GCP resources
When to use: Automation, scripting, CI/CD pipelines, when you want speed
Pros:
- Fast (one command vs clicking through UI)
- Scriptable (put commands in bash scripts)
- Reproducible (same command = same result)
- Works in headless environments (servers, CI/CD)
Cons:
- Learning curve (need to know command syntax)
- Typos can cause issues
- Need SDK installed locally
Installation:
# Linux
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
# macOS
brew install google-cloud-sdk
# Windows
# Download installer from cloud.google.com/sdk/docs/install
# Initialize and authenticate
gcloud init
gcloud auth login
gcloud config set project MY-PROJECT-ID
Example: Create a VM
gcloud compute instances create my-vm \
--zone=us-central1-a \
--machine-type=e2-medium \
--image-family=debian-11 \
--image-project=debian-cloud \
--boot-disk-size=10GB \
--tags=web-server
Common gcloud commands:
# List all projects
gcloud projects list
# List VMs
gcloud compute instances list
# Create Cloud Storage bucket
gcloud storage buckets create gs://my-unique-bucket-name
# Deploy Cloud Function
gcloud functions deploy my-function \
--runtime=python311 \
--trigger-http \
--allow-unauthenticated
# View BigQuery datasets
gcloud alpha bq datasets list
# SSH into a VM
gcloud compute ssh my-vm --zone=us-central1-a
Best for: Automation scripts, CI/CD pipelines, developers comfortable with terminals
Method 3: Terraform (Infrastructure as Code)
What it is: Declarative infrastructure-as-code tool by HashiCorp. You describe what you want, Terraform figures out how to create it.
When to use: Reproducible environments, team collaboration, version-controlled infrastructure, production deployments
Pros:
- Version control in Git (review, rollback, collaborate)
- Declarative (describe desired state, not steps)
- Plan before apply (see changes before making them)
- Reusable modules
- Works across clouds (GCP, AWS, Azure)
Cons:
- Learning curve (HCL syntax, state management)
- State file management complexity
- Requires understanding of infrastructure concepts
Why Terraform?
Imagine you need to recreate your entire infrastructure in a new region. With Console/gcloud, you'd click or type hundreds of commands. With Terraform, you change one variable (region = "europe-west4") and apply. Done.
Terraform workflow:
1. init → Download provider plugins
2. plan → Preview what will change
3. apply → Execute changes
4. destroy → Remove all resources
Example: Create a VM
Create a file called main.tf:
provider "google" {
project = "my-project-id"
region = "us-central1"
}
resource "google_compute_instance" "web_server" {
name = "terraform-vm"
machine_type = "e2-medium"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-11"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral external IP
}
}
tags = ["web-server"]
}
Run it:
terraform init # Download Google provider
terraform plan # Shows: +1 resource to add
terraform apply # Creates the VM
# Type 'yes' to confirm
# Later, if you want to delete
terraform destroy # Removes everything
Terraform components:
- Provider: Cloud platform plugin (google, aws, azurerm)
- Resource: Thing to create (google_compute_instance, google_storage_bucket)
- Module: Reusable collection of resources
- State: Terraform tracks what it created in terraform.tfstate
- Variable: Parameterize your config (var.region, var.environment)
Best for: Production infrastructure, team environments, multi-environment setups (dev/staging/prod), GitOps workflows
Method 4: REST APIs & Client Libraries
What it is: Programmatic access to GCP services via HTTP requests or language-specific SDKs
When to use: Building custom applications that dynamically create/manage GCP resources, integrating GCP into existing tools
Pros:
- Full programmatic control
- Integrate directly into applications
- Available in multiple languages (Python, Java, Node.js, Go, etc.)
Cons:
- More code to write
- Handle authentication yourself
- Need to understand API structure
REST API Example (Python with requests):
import requests
# Get OAuth token
token = "ya29.xxx" # From: gcloud auth print-access-token
# Create VM via REST API
url = "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances"
headers = {"Authorization": f"Bearer {token}"}
body = {
"name": "api-created-vm",
"machineType": "zones/us-central1-a/machineTypes/e2-medium",
"disks": [{
"boot": True,
"initializeParams": {
"sourceImage": "projects/debian-cloud/global/images/family/debian-11"
}
}],
"networkInterfaces": [{
"network": "global/networks/default",
"accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}]
}]
}
response = requests.post(url, json=body, headers=headers)
print(response.json())
Client Library Example (Python):
from google.cloud import compute_v1
client = compute_v1.InstancesClient()
instance = compute_v1.Instance(
name="sdk-created-vm",
machine_type="zones/us-central1-a/machineTypes/e2-medium",
disks=[compute_v1.AttachedDisk(
boot=True,
initialize_params=compute_v1.AttachedDiskInitializeParams(
source_image="projects/debian-cloud/global/images/family/debian-11"
)
)],
network_interfaces=[compute_v1.NetworkInterface(
network="global/networks/default",
access_configs=[compute_v1.AccessConfig(
name="External NAT",
type_="ONE_TO_ONE_NAT"
)]
)]
)
operation = client.insert(
project="my-project",
zone="us-central1-a",
instance_resource=instance
)
print(f"VM creation started: {operation.name}")
Best for: Custom control planes, dynamic resource provisioning in applications, building tools on top of GCP
Method 5: GCP Marketplace
What it is: Pre-configured software stacks and solutions ready to deploy with one click
When to use: Need third-party software (WordPress, MongoDB, Cassandra) without manual setup
Pros:
- One-click deployment
- Pre-configured by vendors
- Maintained and updated by software vendors
- Includes licensing where needed
Cons:
- Less customization than building from scratch
- Potential vendor lock-in
- Costs may include software licensing fees
Example: Deploy WordPress
- Go to GCP Console → Marketplace
- Search "WordPress"
- Click "WordPress Certified by Bitnami"
- Click "Launch"
- Configure:
  - Deployment name: my-blog
  - Zone: us-central1-a
  - Machine type: e2-medium
- Click "Deploy"
- Wait 5 minutes → the WordPress site is live
You get a full WordPress installation with Apache, MySQL, PHP, SSL certificates, and backups, all configured and ready to go.
Popular Marketplace solutions:
- Databases: MongoDB, PostgreSQL, Redis, Cassandra, Neo4j
- CMS: WordPress, Drupal, Joomla
- Analytics: Elasticsearch, Grafana, Kibana
- Security: Palo Alto firewalls, F5 load balancers
- Development: GitLab, Jenkins
Best for: Rapid deployment of standard software, teams without deep ops expertise, proof-of-concepts
Method 6: Cloud Shell
What it is: Browser-based Linux terminal with GCP tools pre-installed
When to use: Quick commands without installing SDK locally, working from public computers, need Terraform/kubectl without local setup
Pros:
- Zero setup (opens in browser)
- gcloud, terraform, kubectl pre-installed
- 5 GB persistent storage (saves your files between sessions)
- Authenticated automatically with your GCP account
Cons:
- Session timeout after inactivity
- Limited compute resources (1.7 GB RAM)
- Not for long-running tasks
How to use:
- GCP Console → click the Cloud Shell icon (top right, looks like >_)
- Terminal opens at the bottom of the browser
- Run commands:
# Already authenticated, no need for gcloud auth login
gcloud compute instances list
# Clone a repo
git clone https://github.com/GoogleCloudPlatform/training-data-analyst.git
# Run Terraform
terraform init
terraform plan
# Edit files
nano my-script.sh
Pre-installed tools:
- gcloud, gsutil (GCP tools)
- terraform, kubectl (infrastructure tools)
- git, vim, nano (dev tools)
- python, node, go (runtimes)
- docker (container management)
Best for: Quick gcloud commands, testing scripts, accessing resources without local SDK, demos and tutorials
Comparison: Which Method to Use When
| Method | Best For | Pros | Cons | Learning Curve |
|---|---|---|---|---|
| Console | Learning, testing | Visual, easy, no code | Not scalable, manual | Low |
| gcloud | Automation, scripts | Fast, scriptable | Command syntax | Medium |
| Terraform | Production IaC | Reproducible, version control | State management | High |
| APIs/SDKs | Custom apps | Full control, integration | More code | High |
| Marketplace | Third-party apps | One-click deploy | Less control | Low |
| Cloud Shell | Quick tasks | No local setup | Limited resources | Low |
Real-world approach: Most teams use a combination:
- Console: Exploring new services, quick debugging
- Terraform: Managing production infrastructure
- gcloud: One-off tasks, CI/CD pipelines
- Cloud Shell: When working from different computers
What's Next: Hands-On with GCP Core Services
You now understand how to control access with IAM, manage costs with billing tools, and create resources using six different methods. These are the foundations you'll use every day working with GCP.
Key takeaways:
- IAM: Principal + Role + Resource = Permission. Use predefined roles, service accounts for apps, groups for teams.
- Billing: Set budgets, use labels for cost tracking, leverage free tier for learning.
- Deployment: Console for learning, gcloud for automation, Terraform for production, APIs for custom tools.
In the next post, we'll get hands-on with GCP's core services: Compute Engine (VMs), BigQuery (data warehouse), and Cloud Storage (object storage). You'll learn when to use each service, how to configure them properly, and how to avoid common mistakes.
Action items for now:
- Create a service account for a test application
- Set up a billing budget with alerts at 50% and 90%
- Try deploying a VM using both gcloud and Terraform
- Add labels to existing resources (env:dev, team:yourname)
Ready to start creating actual infrastructure? See you in Post 3 where we dive deep into Compute Engine, BigQuery, and Cloud Storage with practical examples you can follow along with.