Steampipe: The SQL Swiss Army Knife for Cloud Mischief

The DevSecOps Cloud Visibility Challenge
Your cloud infrastructure is expanding faster than your security team can audit it. Every deployment, every auto-scaling event, every developer-created resource potentially introduces security gaps. You need real-time visibility across your entire multi-cloud environment, but traditional security tools give you point-in-time snapshots, not the continuous monitoring that modern DevSecOps requires.
This is where Steampipe becomes your DevSecOps force multiplier - transforming complex cloud APIs into SQL queries that integrate seamlessly with your existing automation and compliance workflows.
Steampipe: SQL-Driven Cloud Security Intelligence
Think of Steampipe as your cloud infrastructure becoming a massive, queryable database. Every resource, every configuration, every security setting across AWS, Azure, GCP, and 50+ other platforms becomes a table you can query, join, and analyze using familiar SQL syntax.
But this isn’t just about querying - it’s about enabling Policy as Code at enterprise scale. Want to validate your CIS Kubernetes Benchmark compliance across 15 clusters? That’s a SQL query. Need to audit IAM permissions across multiple AWS accounts for SOC2 compliance? SQL query. Want to identify configuration drift in your Terraform-managed infrastructure? You get the idea.
Steampipe transforms cloud security from reactive incident response into proactive, automated governance that integrates directly into your CI/CD pipelines and compliance workflows.
Enterprise DevSecOps Deployment
Steampipe is designed for enterprise-scale DevSecOps operations with support for containerized deployments, infrastructure as code, and CI/CD integration.
Container-First Installation (Recommended)
For production DevSecOps environments, deploy Steampipe as a containerized service:
# Dockerfile for enterprise Steampipe deployment
FROM turbot/steampipe:latest
# Install required plugins for enterprise multi-cloud monitoring
RUN steampipe plugin install aws azure gcp kubernetes terraform \
github gitlab docker oci helm vault consul nomad
# Copy enterprise configuration
COPY steampipe.spc /home/steampipe/.steampipe/config/
COPY enterprise-queries/ /opt/steampipe/queries/
COPY compliance-policies/ /opt/steampipe/policies/
# Set up service user for automated queries
USER steampipe
WORKDIR /opt/steampipe
# Default command for CI/CD integration
CMD ["steampipe", "service", "start", "--host=0.0.0.0", "--port=9193"]
Kubernetes Deployment for Scale
Deploy Steampipe as a Kubernetes service for enterprise-wide cloud security monitoring:
# steampipe-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: steampipe-security-service
  namespace: devsecops
spec:
  replicas: 3
  selector:
    matchLabels:
      app: steampipe
  template:
    metadata:
      labels:
        app: steampipe
    spec:
      serviceAccountName: steampipe-sa
      containers:
        - name: steampipe
          image: your-registry/steampipe-enterprise:latest
          ports:
            - containerPort: 9193
          env:
            - name: STEAMPIPE_LOG_LEVEL
              value: 'info'
            - name: STEAMPIPE_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: steampipe-db-secret
                  key: password
          volumeMounts:
            - name: steampipe-config
              mountPath: /home/steampipe/.steampipe/config
              readOnly: true
            - name: cloud-credentials
              mountPath: /home/steampipe/.aws
              readOnly: true
      volumes:
        - name: steampipe-config
          configMap:
            name: steampipe-enterprise-config
        - name: cloud-credentials
          secret:
            secretName: cloud-access-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: steampipe-service
  namespace: devsecops
spec:
  selector:
    app: steampipe
  ports:
    - port: 9193
      targetPort: 9193
  type: ClusterIP
Enterprise Plugin Configuration
Install and configure plugins for comprehensive multi-cloud coverage:
# Core cloud providers
steampipe plugin install aws azure gcp
# Container and orchestration platforms
steampipe plugin install kubernetes docker oci
# Infrastructure as Code
steampipe plugin install terraform consul vault nomad
# DevSecOps toolchain integration
steampipe plugin install github gitlab jenkins splunk prometheus grafana
# Compliance and security frameworks
steampipe plugin install crowdstrike qualys tenable rapid7
# Configure multi-account access: connections are defined in .spc files
# placed in the Steampipe config directory
cp aws-accounts.spc ~/.steampipe/config/
cp azure-subscriptions.spc ~/.steampipe/config/
cp gcp-projects.spc ~/.steampipe/config/
CI/CD Pipeline Integration
Configure Steampipe for automated security validation in your deployment pipelines:
# .github/workflows/security-compliance.yml
name: Automated Security Compliance
on:
  schedule:
    - cron: '0 */6 * * *' # Every 6 hours
  push:
    branches: [main]
    paths: ['infrastructure/**', 'k8s/**']
jobs:
  security-validation:
    runs-on: ubuntu-latest
    container:
      image: turbot/steampipe:latest
    steps:
      - name: Configure Cloud Access
        run: |
          # Configure AWS access
          aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # Configure Azure access
          az login --service-principal -u ${{ secrets.AZURE_CLIENT_ID }} \
            -p ${{ secrets.AZURE_CLIENT_SECRET }} \
            --tenant ${{ secrets.AZURE_TENANT_ID }}
      - name: Install Enterprise Plugins
        run: |
          steampipe plugin install aws azure gcp kubernetes terraform
      - name: Run Compliance Validation
        run: |
          # CIS Kubernetes Benchmark validation
          steampipe query \
            "select * from kubernetes_pod where security_context is null" \
            --output json > cis-k8s-violations.json
          # AWS security best practices
          steampipe query \
            "select * from aws_s3_bucket where acl::text like '%public%'" \
            --output json > aws-security-violations.json
          # Multi-cloud resource inventory
          steampipe query \
            "select 'aws' as cloud, instance_id as resource, region from aws_ec2_instance
             union all
             select 'azure' as cloud, name as resource, region from azure_compute_virtual_machine
             union all
             select 'gcp' as cloud, name as resource, zone as region from gcp_compute_instance" \
            --output json > multi-cloud-inventory.json
      - name: Generate Compliance Report
        run: |
          # Generate executive summary
          steampipe dashboard compliance-overview \
            --export compliance-report.pdf
      - name: Upload Compliance Artifacts
        uses: actions/upload-artifact@v3
        with:
          name: compliance-reports
          path: |
            *.json
            *.pdf
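A pipeline like this usually needs a gate step that fails the build when the validation step finds anything. Here's a minimal sketch in Python, assuming the list-of-rows JSON format that `steampipe query --output json` emits and the file names produced by the workflow above:

```python
import json

def count_violations(paths):
    """Sum violation rows across Steampipe JSON result files."""
    total = 0
    for path in paths:
        with open(path) as f:
            rows = json.load(f)
        # Each file is assumed to be a plain JSON array of row objects.
        total += len(rows)
    return total

def gate(paths):
    """Return a CI exit code: 0 when clean, 1 when any violation exists."""
    return 1 if count_violations(paths) else 0
```

Invoked with `sys.exit(gate([...]))` as an extra workflow step, a non-zero count blocks the deployment instead of just archiving the report.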
Your First Query: Welcome to the Dark Side
Let’s start with something simple. Fire up Steampipe and let’s see what kind of digital chaos you’ve created:
steampipe query
This drops you into a SQL shell that’s connected to your cloud APIs. It’s like psql, but for clouds, and way more dangerous.
Finding Your EC2 Instances (The Obvious Ones)
-- Let's see what EC2 instances you're running
SELECT
instance_id,
instance_type,
state,
region,
tags
FROM aws_ec2_instance
WHERE state = 'running';
The “Oh Shit” Moment: Public Resources
-- Find publicly accessible RDS instances
-- (Spoiler alert: you probably have some)
SELECT
db_instance_identifier,
engine,
publicly_accessible,
endpoint_address,
region
FROM aws_rds_db_instance
WHERE publicly_accessible = true;
-- S3 buckets that are basically giving away free data
SELECT
name,
region,
acl,
policy_std
FROM aws_s3_bucket
WHERE acl::text LIKE '%public%'
  OR policy_std::text LIKE '%"Principal": "*"%';
Advanced Queries: Where the Real Fun Begins
Once you get comfortable with basic queries, you can start building more sophisticated CIEM (Cloud Infrastructure Entitlement Management) rules. Think of these as “security reality checks” for your cloud.
The “Who Has Root Access?” Query
-- Find IAM users who basically own your AWS account
SELECT
u.name as user_name,
u.create_date,
STRING_AGG(p.policy_name, ', ') as attached_policies
FROM aws_iam_user u
JOIN aws_iam_user_attached_policy p ON u.name = p.user_name
WHERE p.policy_arn = 'arn:aws:iam::aws:policy/AdministratorAccess'
OR p.policy_name LIKE '%Admin%'
GROUP BY u.name, u.create_date
ORDER BY u.create_date DESC;
The “Unused Resources Eating Your Budget” Detective
-- EC2 instances that haven't been touched in ages
SELECT
instance_id,
instance_type,
launch_time,
state,
ROUND(EXTRACT(epoch FROM (NOW() - launch_time))/86400) as days_running,
region
FROM aws_ec2_instance
WHERE state = 'running'
AND launch_time < NOW() - INTERVAL '30 days'
AND instance_id NOT IN (
  SELECT DISTINCT instance_id
  FROM aws_ec2_instance_metric_cpu_utilization_daily
  WHERE timestamp > NOW() - INTERVAL '7 days'
    AND average > 5 -- More than 5% CPU usage
)
ORDER BY days_running DESC;
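To turn that list of idle instances into a number your finance team cares about, you can run the query's JSON output through a small cost estimator. The price map below is purely hypothetical — real on-demand rates vary by region and change over time:

```python
# Hypothetical on-demand hourly prices; treat this map as a placeholder,
# not real AWS pricing.
HOURLY_PRICE = {"m5.large": 0.096, "c5.xlarge": 0.17}

def idle_spend(rows):
    """Estimate what the idle instances found by the query have cost so far."""
    total = 0.0
    for row in rows:
        hourly = HOURLY_PRICE.get(row["instance_type"], 0.0)
        total += hourly * 24 * row["days_running"]
    return round(total, 2)
```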
The “Security Group Hall of Shame”
-- Security groups that are basically "welcome everyone!"
SELECT
group_id,
group_name,
description,
vpc_id,
region,
ip_permissions
FROM aws_vpc_security_group,
  jsonb_array_elements(ip_permissions) as perm
WHERE perm->>'IpProtocol' = '-1' -- All protocols
OR (
perm->'IpRanges' @> '[{"CidrIp": "0.0.0.0/0"}]'::jsonb
AND (
perm->'FromPort' IS NULL
OR perm->>'FromPort' IN ('22', '3389', '80', '443')
)
);
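The rule logic itself is simple enough to replicate outside SQL — useful if you post-process query results in a script. A sketch of the same check against a single `ip_permissions` entry (the port list mirrors the query above):

```python
RISKY_PORTS = {22, 3389, 80, 443}

def is_wide_open(perm):
    """Mirror the SQL check: all protocols, or 0.0.0.0/0 on a risky (or unset) port."""
    if perm.get("IpProtocol") == "-1":
        return True  # all protocols allowed
    open_to_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
    from_port = perm.get("FromPort")
    return open_to_world and (from_port is None or from_port in RISKY_PORTS)
```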
Creating Custom CIEM Rules: Building Your Security Empire
Now let’s get into the really fun stuff - creating custom rules that help you maintain some semblance of order in your cloud chaos.
Rule 1: The “Crypto Mining Detector”
-- Instances that might be mining cryptocurrency
-- (High CPU, suspicious instance types, running 24/7)
CREATE VIEW suspicious_crypto_mining AS
SELECT
e.instance_id,
e.instance_type,
e.region,
e.launch_time,
ROUND(EXTRACT(epoch FROM (NOW() - e.launch_time))/86400) as days_running,
e.tags
FROM aws_ec2_instance e
WHERE e.state = 'running'
AND e.instance_type IN ('m5.large', 'm5.xlarge', 'c5.large', 'c5.xlarge', 'c5.2xlarge')
AND e.launch_time < NOW() - INTERVAL '7 days'
AND (e.tags IS NULL OR NOT (e.tags ? 'Environment' OR e.tags ? 'Project'));
Rule 2: The “Privileged Service Account Audit”
-- Service accounts with admin privileges (usually a bad sign)
CREATE VIEW overprivileged_service_accounts AS
SELECT
u.name,
u.path,
u.create_date,
CASE
WHEN u.password_last_used IS NULL THEN 'Never logged in'
ELSE u.password_last_used::text
END as last_login,
STRING_AGG(p.policy_name, ', ') as admin_policies
FROM aws_iam_user u
JOIN aws_iam_user_attached_policy p ON u.name = p.user_name
WHERE (u.name LIKE '%service%' OR u.name LIKE '%app%' OR u.name LIKE '%system%')
AND (
p.policy_arn LIKE '%Administrator%'
OR p.policy_arn LIKE '%PowerUser%'
OR p.policy_name LIKE '%Admin%'
)
GROUP BY u.name, u.path, u.create_date, u.password_last_used;
Rule 3: The “Data Breach Waiting to Happen”
-- S3 buckets with dangerous combinations
CREATE VIEW data_breach_candidates AS
SELECT
b.name,
b.region,
b.versioning_enabled,
b.server_side_encryption_configuration,
CASE
WHEN b.policy_std::text LIKE '%"Principal": "*"%' THEN 'Public Read'
WHEN b.acl::text LIKE '%public%' THEN 'Public ACL'
ELSE 'Check manually'
END as exposure_type,
CASE
WHEN b.server_side_encryption_configuration IS NULL THEN 'No encryption'
ELSE 'Encrypted'
END as encryption_status
FROM aws_s3_bucket b
WHERE (
b.policy_std::text LIKE '%"Principal": "*"%'
OR b.acl::text LIKE '%public%'
)
AND (
b.name LIKE '%backup%'
OR b.name LIKE '%data%'
OR b.name LIKE '%logs%'
OR b.name LIKE '%private%'
);
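If you export this view's source columns and score buckets in a script instead, the classification boils down to a few string checks. A sketch, assuming the jsonb columns arrive cast to text (the name hints match the view's WHERE clause):

```python
SENSITIVE_HINTS = ("backup", "data", "logs", "private")

def classify_bucket(name, acl_text, policy_text):
    """Mirror the view's exposure logic for a single bucket."""
    if not any(hint in name for hint in SENSITIVE_HINTS):
        return None  # name doesn't suggest sensitive content; out of scope for this rule
    if '"Principal": "*"' in policy_text:
        return "Public Read"
    if "public" in acl_text:
        return "Public ACL"
    return "Check manually"
```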
Multi-Cloud Queries: Because Vendor Lock-in is for Quitters
One of Steampipe’s superpowers is querying across multiple cloud providers in a single query. Here’s where things get really interesting:
Cross-Cloud Resource Inventory
-- Get a unified view of compute resources across AWS, Azure, and GCP
SELECT
'AWS' as provider,
instance_id as resource_id,
instance_type as size,
state,
region,
tags::text as labels
FROM aws_ec2_instance
UNION ALL
SELECT
'Azure' as provider,
vm_id as resource_id,
vm_size as size,
power_state as state,
region,
tags::text as labels
FROM azure_compute_virtual_machine
UNION ALL
SELECT
'GCP' as provider,
name as resource_id,
machine_type as size,
status as state,
zone as region,
labels::text as labels
FROM gcp_compute_instance;
The “Who’s Spending What” Multi-Cloud Audit
-- Compare costs across providers (requires cost management APIs)
WITH cloud_costs AS (
SELECT
'AWS' as provider,
service,
SUM(unblended_cost_amount) as monthly_cost
FROM aws_cost_by_service_monthly
WHERE period_start >= date_trunc('month', CURRENT_DATE)
GROUP BY service
UNION ALL
SELECT
'Azure' as provider,
meter_category as service,
SUM(cost) as monthly_cost
FROM azure_consumption_usage_detail
WHERE billing_period_start >= date_trunc('month', CURRENT_DATE)
GROUP BY meter_category
)
SELECT provider, service, monthly_cost
FROM cloud_costs
WHERE monthly_cost > 100 -- Only show services costing more than $100/month
ORDER BY monthly_cost DESC;
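The same grouping-and-threshold logic, applied to the query's JSON rows in Python — handy when the cost tables for one provider aren't available and you have to merge exported data instead:

```python
from collections import defaultdict

def costly_services(rows, threshold=100.0):
    """Group result rows by (provider, service) and keep those above the threshold."""
    totals = defaultdict(float)
    for row in rows:
        totals[(row["provider"], row["service"])] += row["monthly_cost"]
    over = [(p, s, c) for (p, s), c in totals.items() if c > threshold]
    # Most expensive first, matching the SQL ORDER BY
    return sorted(over, key=lambda item: item[2], reverse=True)
```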
Enterprise DevSecOps Use Cases
Scenario 1: Automated Compliance Validation for SOC2 Type II
Challenge: Demonstrating continuous compliance monitoring across multi-cloud infrastructure for SOC2 audit.
Solution: Automated compliance queries integrated into CI/CD pipeline with audit trail generation.
-- SOC2 CC6.1: Logical Access Controls
-- Query to validate all production databases have access logging enabled
CREATE VIEW soc2_database_access_controls AS
SELECT
'aws-rds' as service,
db_instance_identifier as resource_id,
region,
CASE
WHEN enabled_cloudwatch_logs_exports IS NOT NULL
THEN 'COMPLIANT: Audit logging enabled'
ELSE 'NON-COMPLIANT: Missing audit logs'
END as compliance_status,
'SOC2-CC6.1' as control_id,
NOW() as assessment_timestamp
FROM aws_rds_db_instance
WHERE tags->>'Environment' = 'production'
UNION ALL
SELECT
'azure-sql' as service,
name as resource_id,
region,
CASE
WHEN auditing_policy_state = 'Enabled'
THEN 'COMPLIANT: Audit logging enabled'
ELSE 'NON-COMPLIANT: Missing audit logs'
END as compliance_status,
'SOC2-CC6.1' as control_id,
NOW() as assessment_timestamp
FROM azure_sql_database
WHERE tags->>'Environment' = 'production';
-- SOC2 CC6.7: Data Transmission Controls
-- Validate encryption in transit for all public-facing services
CREATE VIEW soc2_encryption_in_transit AS
SELECT
  'aws-elb' as service,
  lb.arn as resource_id,
  lb.region,
  CASE
    WHEN lb.scheme = 'internet-facing' AND
         EXISTS (SELECT 1 FROM aws_ec2_load_balancer_listener l
                 WHERE l.load_balancer_arn = lb.arn
                   AND l.protocol = 'HTTPS')
    THEN 'COMPLIANT: HTTPS enforced'
    WHEN lb.scheme = 'internet-facing'
    THEN 'NON-COMPLIANT: HTTP traffic allowed'
    ELSE 'NOT-APPLICABLE: Internal load balancer'
  END as compliance_status,
  'SOC2-CC6.7' as control_id,
  NOW() as assessment_timestamp
FROM aws_ec2_application_load_balancer lb
WHERE lb.tags->>'Environment' = 'production';
Automation Script for Continuous Compliance:
#!/bin/bash
# continuous-compliance-monitoring.sh
# Export compliance results for audit trail
steampipe query "SELECT * FROM soc2_database_access_controls" \
  --output json > "compliance-reports/soc2-cc61-$(date +%Y%m%d-%H%M%S).json"
steampipe query "SELECT * FROM soc2_encryption_in_transit" \
  --output json > "compliance-reports/soc2-cc67-$(date +%Y%m%d-%H%M%S).json"
# Check for non-compliant resources
NON_COMPLIANT=$(steampipe query \
"SELECT COUNT(*) as violations FROM (
SELECT * FROM soc2_database_access_controls WHERE compliance_status LIKE 'NON-COMPLIANT%'
UNION ALL
SELECT * FROM soc2_encryption_in_transit WHERE compliance_status LIKE 'NON-COMPLIANT%'
) violations" --output json | jq '.[0].violations')
if [ "$NON_COMPLIANT" -gt 0 ]; then
echo "🚨 COMPLIANCE ALERT: $NON_COMPLIANT SOC2 violations detected"
# Send alert to compliance team
curl -X POST "$SLACK_WEBHOOK_URL" \
-H 'Content-type: application/json' \
--data "{\"text\":\"SOC2 Compliance Alert: $NON_COMPLIANT violations detected in production environment\"}"
fi
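If you prefer to keep the counting and alert formatting out of bash, the same logic is a few lines of Python. This sketch consumes the compliance rows (any source with a `compliance_status` column) and builds the Slack payload the script above posts:

```python
import json

def compliance_alert(rows):
    """Count NON-COMPLIANT rows and build the Slack payload, or None when clean."""
    violations = sum(
        1 for row in rows if row["compliance_status"].startswith("NON-COMPLIANT")
    )
    if violations == 0:
        return None
    text = (f"SOC2 Compliance Alert: {violations} violations "
            "detected in production environment")
    return json.dumps({"text": text})
```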
Scenario 2: Infrastructure as Code Drift Detection
Challenge: Terraform-managed infrastructure drifts from desired state, creating security and compliance gaps.
Solution: Automated drift detection using Steampipe to compare live state against Terraform configuration.
-- Detect S3 buckets created outside of Terraform management
CREATE VIEW terraform_drift_s3_buckets AS
WITH terraform_managed_buckets AS (
SELECT
name as bucket_name,
'managed' as management_status
FROM terraform_resource
WHERE type = 'aws_s3_bucket'
),
live_buckets AS (
SELECT
name as bucket_name,
creation_date,
tags
FROM aws_s3_bucket
)
SELECT
lb.bucket_name,
lb.creation_date,
lb.tags,
CASE
WHEN tmb.management_status IS NULL
THEN 'DRIFT: Unmanaged S3 bucket detected'
ELSE 'MANAGED: Terraform controlled'
END as drift_status,
'aws_s3_bucket' as resource_type
FROM live_buckets lb
LEFT JOIN terraform_managed_buckets tmb ON lb.bucket_name = tmb.bucket_name
WHERE lb.tags->>'Environment' = 'production';
-- Detect EC2 instances with configuration drift
CREATE VIEW terraform_drift_ec2_instances AS
WITH terraform_instances AS (
SELECT
attributes_json->>'id' as instance_id,
attributes_json->>'instance_type' as tf_instance_type,
attributes_json->>'vpc_security_group_ids' as tf_security_groups
FROM terraform_resource
WHERE type = 'aws_instance'
),
live_instances AS (
SELECT
instance_id,
instance_type,
security_groups,
tags
FROM aws_ec2_instance
WHERE state = 'running'
)
SELECT
li.instance_id,
li.instance_type as current_type,
ti.tf_instance_type as terraform_type,
CASE
WHEN ti.instance_id IS NULL
THEN 'DRIFT: Instance not in Terraform state'
WHEN li.instance_type != ti.tf_instance_type
THEN 'DRIFT: Instance type changed'
ELSE 'COMPLIANT: Matches Terraform'
END as drift_status
FROM live_instances li
LEFT JOIN terraform_instances ti ON li.instance_id = ti.instance_id
WHERE li.tags->>'Environment' = 'production';
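The core of both drift views is a set difference: live resources minus Terraform-managed resources. Stripped of SQL, the S3 case looks like this (bucket-name lists stand in for the two CTEs above):

```python
def s3_drift(live_buckets, terraform_buckets):
    """Replicate the LEFT JOIN: live buckets absent from the managed set are drift."""
    managed = set(terraform_buckets)
    return {
        name: ("MANAGED: Terraform controlled" if name in managed
               else "DRIFT: Unmanaged S3 bucket detected")
        for name in live_buckets
    }
```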
Scenario 3: Multi-Cloud Security Posture Assessment
Challenge: Maintaining consistent security posture across AWS, Azure, and GCP environments.
Solution: Unified security assessment across all cloud providers with standardized scoring.
-- Multi-cloud security score calculation
CREATE VIEW multi_cloud_security_posture AS
WITH security_metrics AS (
-- AWS Security Metrics
SELECT
'aws' as cloud_provider,
'encryption' as security_domain,
COUNT(*) as total_resources,
COUNT(*) FILTER (WHERE encrypted = true) as compliant_resources,
ROUND(
  (COUNT(*) FILTER (WHERE encrypted = true)::numeric / COUNT(*)) * 100, 2
) as compliance_percentage
FROM aws_ebs_volume
WHERE tags->>'Environment' = 'production'
UNION ALL
SELECT
'aws' as cloud_provider,
'public_access' as security_domain,
COUNT(*) as total_resources,
COUNT(*) FILTER (WHERE acl::text NOT LIKE '%public%') as compliant_resources,
ROUND(
  (COUNT(*) FILTER (WHERE acl::text NOT LIKE '%public%')::numeric / COUNT(*)) * 100, 2
) as compliance_percentage
FROM aws_s3_bucket
UNION ALL
-- Azure Security Metrics
SELECT
'azure' as cloud_provider,
'encryption' as security_domain,
COUNT(*) as total_resources,
COUNT(*) FILTER (WHERE encryption_at_rest_enabled = true) as compliant_resources,
ROUND(
  (COUNT(*) FILTER (WHERE encryption_at_rest_enabled = true)::numeric / COUNT(*)) * 100, 2
) as compliance_percentage
FROM azure_storage_account
WHERE tags->>'Environment' = 'production'
UNION ALL
-- GCP Security Metrics
SELECT
'gcp' as cloud_provider,
'public_access' as security_domain,
COUNT(*) as total_resources,
COUNT(*) FILTER (WHERE public_access_prevention = 'enforced') as compliant_resources,
ROUND(
  (COUNT(*) FILTER (WHERE public_access_prevention = 'enforced')::numeric / COUNT(*)) * 100, 2
) as compliance_percentage
FROM gcp_storage_bucket
WHERE labels->>'environment' = 'production'
)
SELECT
cloud_provider,
security_domain,
total_resources,
compliant_resources,
compliance_percentage,
CASE
WHEN compliance_percentage >= 95 THEN 'EXCELLENT'
WHEN compliance_percentage >= 85 THEN 'GOOD'
WHEN compliance_percentage >= 70 THEN 'NEEDS_IMPROVEMENT'
ELSE 'CRITICAL'
END as security_grade,
NOW() as assessment_timestamp
FROM security_metrics
ORDER BY cloud_provider, security_domain;
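If downstream tooling consumes the raw percentages rather than the view, the grading thresholds are easy to reuse. The same CASE logic as a function:

```python
def security_grade(compliance_percentage):
    """Same thresholds as the SQL CASE in the posture view above."""
    if compliance_percentage >= 95:
        return "EXCELLENT"
    if compliance_percentage >= 85:
        return "GOOD"
    if compliance_percentage >= 70:
        return "NEEDS_IMPROVEMENT"
    return "CRITICAL"
```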
Executive Dashboard Query:
-- Executive security dashboard - high-level metrics
SELECT
cloud_provider,
AVG(compliance_percentage) as overall_security_score,
COUNT(*) as assessed_domains,
COUNT(*) FILTER (WHERE security_grade = 'CRITICAL') as critical_issues,
COUNT(*) FILTER (WHERE security_grade = 'NEEDS_IMPROVEMENT') as improvement_areas
FROM multi_cloud_security_posture
GROUP BY cloud_provider
ORDER BY overall_security_score DESC;
Building Your Own Steampipe Dashboard
Want to turn your queries into something that looks professional? Steampipe has a dashboard feature that’s surprisingly slick:
Create a Dashboard Config
# dashboard.sp
dashboard "cloud_security_overview" {
  title = "Cloud Security Overview (AKA: How Bad Is It?)"

  container {
    chart {
      title = "Public Resources by Type"
      sql   = <<-EOT
        SELECT
          'EC2' as resource_type,
          COUNT(*) as count
        FROM aws_ec2_instance
        WHERE state = 'running'
          AND public_ip_address IS NOT NULL
        UNION ALL
        SELECT
          'RDS' as resource_type,
          COUNT(*) as count
        FROM aws_rds_db_instance
        WHERE publicly_accessible = true
        UNION ALL
        SELECT
          'S3' as resource_type,
          COUNT(*) as count
        FROM aws_s3_bucket
        WHERE acl::text LIKE '%public%'
      EOT
      type = "column"
    }
  }

  container {
    table {
      title = "High-Risk Resources"
      sql   = <<-EOT
        SELECT
          resource_type,
          resource_id,
          risk_level,
          description
        FROM (
          SELECT
            'EC2' as resource_type,
            e.instance_id as resource_id,
            'HIGH' as risk_level,
            'Public instance with SSH access' as description
          FROM aws_ec2_instance e
          JOIN aws_vpc_security_group sg
            ON sg.group_id IN (
              SELECT g ->> 'GroupId'
              FROM jsonb_array_elements(e.security_groups) as g
            )
          WHERE e.public_ip_address IS NOT NULL
            AND sg.ip_permissions::text LIKE '%"FromPort": 22%'
            AND sg.ip_permissions::text LIKE '%"0.0.0.0/0"%'
        ) as high_risk_resources
        ORDER BY risk_level DESC, resource_type
      EOT
    }
  }
}
Launch Your Dashboard
steampipe dashboard --dashboard-listen=local
Navigate to http://localhost:9194 and watch your security violations come to life in beautiful charts and tables. It’s like a real-time disaster movie, but for your infrastructure.
Pro Tips for Steampipe Mastery
1. Create Reusable Views
Instead of writing complex queries repeatedly, create views:
-- Save this in ~/.steampipe/config/views.sql
CREATE VIEW my_public_resources AS
SELECT
  'EC2' as type,
  instance_id as id,
  region,
  public_ip_address as public_ip,
  launch_time as created_time
FROM aws_ec2_instance
WHERE public_ip_address IS NOT NULL
UNION ALL
SELECT
  'RDS' as type,
  db_instance_identifier as id,
  region,
  endpoint_address as public_ip,
  create_time as created_time
FROM aws_rds_db_instance
WHERE publicly_accessible = true;
2. Automate with Scripts
Create bash scripts that run your queries and send alerts:
#!/bin/bash
# security_check.sh
# Check for new public resources
NEW_PUBLIC=$(steampipe query --output json "
SELECT COUNT(*) as count
FROM my_public_resources
WHERE created_time > NOW() - INTERVAL '24 hours'
" | jq -r '.[0].count')
if [ "$NEW_PUBLIC" -gt 0 ]; then
echo "WARNING: $NEW_PUBLIC new public resources detected!"
# Send to Slack, email, or whatever notification system you use
fi
3. Version Control Your Queries
Keep your CIEM rules in git repos:
my-steampipe-rules/
├── queries/
│ ├── compliance/
│ │ ├── pci-dss.sql
│ │ ├── soc2.sql
│ │ └── gdpr.sql
│ ├── security/
│ │ ├── public-resources.sql
│ │ ├── overprivileged-users.sql
│ │ └── crypto-mining.sql
│ └── cost/
│ ├── unused-resources.sql
│ └── expensive-resources.sql
├── dashboards/
│ └── security-overview.sp
└── scripts/
└── daily-security-check.sh
Real-World War Stories: When Steampipe Saves the Day
The Case of the Mysterious Bitcoin Miner
A few months ago, an AWS bill came in that was 300% higher than normal. Using Steampipe, we quickly identified:
-- The query that found the culprit
SELECT
instance_id,
instance_type,
launch_time,
region,
tags,
ROUND(EXTRACT(epoch FROM (NOW() - launch_time))/3600) as hours_running
FROM aws_ec2_instance
WHERE state = 'running'
AND instance_type IN ('c5.4xlarge', 'c5.9xlarge', 'c5.18xlarge')
AND launch_time > '2024-01-01'
AND (tags->>'Name' IS NULL OR tags->>'Name' = '')
ORDER BY hours_running DESC;
Result: Three c5.4xlarge instances running for 2 weeks straight with no tags, no monitoring, and suspiciously high CPU usage. Someone had left their mining operation running in the dev account. Oops.
The Great S3 Bucket Leak of 2024
During a security audit, we needed to quickly assess data exposure across 47 AWS accounts:
-- The query that prevented a major incident
SELECT
b.name,
b.region,
CASE
WHEN b.policy_std::text LIKE '%"Principal": "*"%' THEN 'CRITICAL: Public read access'
WHEN b.acl::text LIKE '%public-read-write%' THEN 'CRITICAL: Public write access'
WHEN b.acl::text LIKE '%public-read%' THEN 'HIGH: Public ACL'
ELSE 'Unknown exposure'
END as risk_level,
b.creation_date
FROM aws_s3_bucket b
WHERE b.name LIKE '%prod%'
OR b.name LIKE '%customer%'
OR b.name LIKE '%backup%'
ORDER BY risk_level DESC;
We found 12 production buckets with public read access containing customer data. Thanks to Steampipe, we fixed them all within an hour instead of the weeks it would have taken manually.
The Future of Cloud Chaos Management
Steampipe represents something beautiful: the democratization of cloud security. You don’t need expensive enterprise tools or a team of specialists to understand what’s happening in your cloud infrastructure. You just need SQL skills and a willingness to dig into the chaos.
As cloud environments continue to grow more complex, tools like Steampipe become essential. They give you the power to ask questions like:
- “Show me all resources created by that contractor who left last month”
- “Which of my databases are accessible from the internet?”
- “How much am I spending on resources that nobody’s using?”
- “Who has the ability to delete my entire infrastructure?”
These aren’t just technical questions - they’re business-critical questions that can save you money, prevent breaches, and help you sleep better at night.
Getting Started: Your Cloud Security Journey Begins
Ready to start your own cloud archaeology expedition? Here’s your homework:
- Install Steampipe and at least one cloud plugin
- Run the basic queries in this post against your infrastructure
- Be prepared to be horrified by what you find
- Create your first custom CIEM rule for something specific to your environment
- Set up automated monitoring for your most critical security rules
- Share your findings with your team (they’ll thank you later)
Remember: The goal isn’t to achieve perfect cloud security (that’s impossible). The goal is to understand your chaos well enough to manage it effectively.
And hey, if you find any Bitcoin miners running in your infrastructure, at least now you know how to find them quickly. 😄
Final Words: Embrace the Chaos
Cloud infrastructure is inherently chaotic. Resources get created, forgotten, modified, and abandoned. Permissions accumulate like digital dust. Security groups multiply like rabbits.
But with Steampipe, you can at least understand your chaos. You can query it, visualize it, and maybe even tame it a little bit.
So go forth, run some queries, and discover what digital mysteries lurk in your cloud accounts. Just don’t blame me when you find out someone’s been mining Dogecoin on your production clusters.
Happy querying, and may your SQL be forever syntactically correct! 🚀
Got interesting Steampipe queries or war stories? Hit me up - I’m always looking for new ways to abuse SQL for cloud security purposes.