
Staying on top of ephemeral environments is a challenge many organizations face. This blog post describes the process we undertook at Thought Machine, a cloud-native company with environments spanning multiple cloud providers, to find a solution able to detect, categorize, and visualize all the cloud assets deployed across the organization.

We will start with the challenges posed by ephemeral environments, then cover what Cartography is, why it is useful, and the high-level design of our deployment, concluding with how to consume the data it collects. We are also going to open source a set of custom queries and tooling we created to simplify data consumption.

Let’s start by defining what problem we are going to tackle and how Cartography can help with it.
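To give a taste of what "consuming the data" looks like, Cartography loads everything it discovers into a Neo4j graph that you query with Cypher. The sketch below assumes node labels and properties from Cartography's published schema (`AWSAccount`, `EC2Instance`, `publicipaddress`) — exact names may vary across versions, so treat it as illustrative:

```cypher
// List every EC2 instance with a public IP, grouped by the AWS account it lives in.
MATCH (a:AWSAccount)-[:RESOURCE]->(i:EC2Instance)
WHERE i.publicipaddress IS NOT NULL
RETURN a.name AS account, i.instanceid AS instance, i.publicipaddress AS public_ip
ORDER BY account;
```

A query like this turns "do we have anything internet-facing we forgot about?" into a one-liner, which is exactly the kind of question ephemeral environments make hard to answer by hand.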

Ever been in the situation where you have dozens (or even more) of AWS accounts and/or GCP projects, and need to run a security tool against your entire estate?

In this post I’ll try to summarize what cloud resources (like roles and users) are needed, and how to define them in a manner that safely allows you to perform a security audit across a fleet of AWS accounts/GCP projects.
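To make this concrete, the usual pattern on AWS is a read-only audit role in every target account whose trust policy allows a single central account to assume it. A minimal sketch of such a trust policy follows — the account ID is a placeholder, and the MFA condition is one hardening option among several:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

Attaching the AWS-managed `SecurityAudit` policy to the role then gives the scanner read-only visibility into each account without granting any write permissions.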

This post is Part 2 of the “Offensive Infrastructure with Modern Technologies” series, and is going to focus on an automated deployment of the HashiCorp stack (i.e., the HashiStack).

Part 1 explained how to configure Consul in both single- and multi-node deployments using docker-compose, while here I’m going to provide a step-by-step walkthrough that will allow you to automatically deploy the full stack with Ansible.
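For reference, the single-node Consul setup covered in Part 1 can be brought up with a docker-compose file along these lines. The image tag and port mappings here are assumptions — check the official image documentation for the version you deploy:

```yaml
version: "3"
services:
  consul:
    image: hashicorp/consul:1.15   # pin a specific version in real deployments
    command: agent -dev -client=0.0.0.0
    ports:
      - "8500:8500"                # HTTP API and web UI
      - "8600:8600/udp"            # DNS interface
```

Dev mode (`-dev`) runs an in-memory server with no persistence, which is fine for following along but not for anything production-facing.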