As part of moving from a few on-premise monoliths to multiple on-premise microservices, I'm trying to improve the situation where database passwords and other credentials are stored in configuration files in /etc.
Regardless of the technology used, the consumer of the secrets needs to authenticate with a secret store somehow. How is this initial secret-consumer-authentication trust established?
It seems we have a chicken-and-egg problem: in order to get credentials from a server, we need a /etc/secretCredentials.yaml file with a cert, token or password. At that point, I might (almost) as well stick with the configuration files we have today.
If I wanted to use something like HashiCorp Vault (which seems to be the market leader) for this, there is a Secure Introduction of Vault Clients article. It outlines three methods: Platform Integration, Trusted Orchestrator, and Vault Agent.
When looking at the various Vault Auth Methods available to the Vault Agent, they all look like they boil down to having a cert, token or password stored locally. Vault's AppRole Pull Authentication article describes the challenge perfectly, but then doesn't describe how the app gets the
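One mechanism Vault does document for softening this problem is response wrapping: rather than handing the app its long-lived secret directly, an intermediary wraps it behind a random, single-use, short-TTL wrapping token. Below is a toy Python model of that idea (this is *not* the real Vault API; `WrappingStore` and its methods are illustrative names). The point it demonstrates is why wrapping helps: an intercepted wrapping token can only be redeemed once, so theft is detected when the legitimate consumer's unwrap fails.

```python
import secrets
import time

class WrappingStore:
    """Toy model of Vault-style response wrapping (NOT the real Vault API).

    An intermediary wraps the real secret (e.g. an AppRole SecretID)
    behind a random, single-use, short-TTL wrapping token. The app
    unwraps it exactly once; a second unwrap fails, so an intercepted
    token is detected rather than silently reused.
    """

    def __init__(self):
        self._wrapped = {}

    def wrap(self, secret: str, ttl_seconds: int = 60) -> str:
        token = secrets.token_hex(16)
        self._wrapped[token] = (secret, time.monotonic() + ttl_seconds)
        return token

    def unwrap(self, token: str) -> str:
        # pop() makes the token single-use: a second caller gets a KeyError
        secret, expires_at = self._wrapped.pop(token)
        if time.monotonic() > expires_at:
            raise KeyError("wrapping token expired")
        return secret

store = WrappingStore()
token = store.wrap("the-real-secret-id")
assert store.unwrap(token) == "the-real-secret-id"  # first unwrap succeeds
try:
    store.unwrap(token)  # replay attempt
except KeyError:
    pass  # single-use: the replay is rejected
```

Wrapping doesn't eliminate the bootstrap problem (something still has to deliver the wrapping token), but it shrinks the window and makes compromise observable, which is why it pairs naturally with the orchestrator-based delivery discussed below.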
The only thing I can think of is IP address. But our servers are all running in the same virtualization environment, and so today, they all have random IP addresses from the same DHCP pool, making it hard to create ACLs based on IP address. We could change that. But even then, is request IP address/subnet sufficiently safe to use as a valid credential?
We can't be the first in the universe to hit this.
Are there alternatives to having a /etc/secretCredentials.yaml file or ACLs based on IP address, or is that the best we can do?
What is the relevant terminology and what are the best practices, so we don't invent our own (insecure) solution?
(Reposting as answer instead of a comment)
It seems to me that the "Trusted Orchestrator" model bears further consideration. The Vault docs mention Terraform and Chef, but the role of an orchestrator is more general than those particular tools. In almost any microservice environment, there will be a system responsible for creating and provisioning new application instances (containers, VMs, what have you). This system is ipso facto the "orchestrator," and is therefore capable of providing credentials. E.g. (as mentioned in your comment) it could SSH into the instance immediately after creation and write a temporary Vault token, which the application could then use to discover the rest of its credentials. Alternatively, it could provide the token as an environment variable (common practice in container-based setups) or even via a network service a la AWS (which makes it easy to rotate periodically).
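The "SSH in and drop a token" step can be sketched as follows. This is a minimal illustration using a local directory in place of a real remote instance; the function names are hypothetical, and in a real deployment the token would be a short-TTL, narrowly-scoped Vault token rather than a random string.

```python
import os
import secrets
import tempfile

def orchestrator_provision(instance_dir: str) -> None:
    """Orchestrator side: after creating the instance, write a short-lived
    bootstrap token to a file only the app's user can read (mode 0600,
    created exclusively so an attacker can't pre-plant the file)."""
    token = secrets.token_hex(16)
    path = os.path.join(instance_dir, "bootstrap-token")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(token)

def app_bootstrap(instance_dir: str) -> str:
    """App side: read the token once at startup, then delete it so it
    cannot be reused; the token is only good for fetching the app's
    real credentials from the secret store."""
    path = os.path.join(instance_dir, "bootstrap-token")
    with open(path) as f:
        token = f.read()
    os.unlink(path)  # single-use on disk
    return token

with tempfile.TemporaryDirectory() as d:
    orchestrator_provision(d)
    token = app_bootstrap(d)
    assert len(token) == 32
    assert not os.path.exists(os.path.join(d, "bootstrap-token"))
```

The key design point is that the bootstrap secret is ephemeral and scoped: it exists only between provisioning and first application startup, and it authorizes nothing beyond the initial exchange, so it is categorically different from a static password sitting in /etc forever.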
In point of fact, I think the "Platform Integration" model resolves to the "Trusted Orchestrator" model once you examine it. The difference is just that the orchestrator is managed by the platform operator (AWS, Azure, etc.) rather than the application owner.