Securing services within a hybrid, private cloud
It seems to me there is an inverse relationship between security and functionality. To gain functionality in my hybrid/virtual cloud environment, I have to sacrifice security. Or do I?
To be honest, I’m not entirely certain yet. I keep running into problems I need to solve within my cloud that I can’t easily address. Case in point: keeping configuration files in sync.
In order to keep a configuration file in sync, I have to provide some mechanism to overwrite a local file with the contents of a remote file (either one that’s downloaded for that purpose, or one that comes as the body of a RabbitMQ message). Even though our user pool is limited to employees of the company, that still exposes (at least theoretically) a vulnerability: a user with access to our LAN could manipulate the file-overwriting process to inject their own script onto a cloud-based machine. To combat this, I’m only exposing a limited set of files for updating, identified by a key value rather than the full, real path. Even if a malicious user were able to force a config file to be updated with the contents of their script, only a limited number of files would be affected, and their direct paths would never be exposed: the updater takes a key value which maps to the real local path inside the update consumer, and this map is never exposed to users.
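A minimal sketch of that key-to-path indirection might look like the following. The key names and paths here are hypothetical, not the ones my consumer actually uses:

```python
import os

# Hypothetical mapping from opaque keys to real config paths.
# This dict lives only inside the update consumer; clients ever
# only see the keys, never the filesystem paths.
CONFIG_MAP = {
    "web-vhost": "/etc/apache2/sites-available/main.conf",
    "app-settings": "/opt/myapp/settings.ini",
}

def update_config(key, contents):
    """Overwrite the config file identified by an opaque key."""
    path = CONFIG_MAP.get(key)
    if path is None:
        # Unknown keys are rejected outright, so a message can only
        # ever touch the handful of files listed above.
        raise KeyError("unknown config key: %r" % key)
    with open(path, "w") as f:
        f.write(contents)
```

The important property is that the message body can only select from the whitelist; it can never name an arbitrary path.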
Assuming this malicious user was able to get their script onto a server, they’d still have to invoke it somehow. My config file updater is meant only to update configuration files, so it writes its updates to the filesystem with the executable bit turned off; the injected script would not be executable.
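In Python, for instance, clearing the executable bit could be done along these lines (a sketch, not my actual updater):

```python
import os

def write_non_executable(path, contents):
    """Write contents to path, ensuring the file ends up mode 0644
    (rw-r--r--): readable, writable by owner, never executable."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, contents.encode())
    finally:
        os.close(fd)
    # Strip execute bits explicitly in case the file already existed
    # with a more permissive mode.
    os.chmod(path, 0o644)
```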
After a configuration file has been updated, it’s very likely a service will need to be restarted. Exposing some functionality to restart services also gives me pause, because I’ll be running commands in the shell at the behest of a message which I can’t really be certain isn’t coming from a malicious user. By using the key-based method previously described, though, I can severely limit the number of commands that might potentially be run. Exposing only the key value means the full path to the script is not derived from the data coming from the message, but from data already living on the machine which is executing the command. If a malicious user was able to manipulate this side of the system, they’d still only be able to invoke restarts on specific services.
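The same whitelist idea applies to restarts. A sketch, again with hypothetical key and service names, might be:

```python
import subprocess

# Hypothetical whitelist: message keys map to fixed argument vectors.
# Nothing from the message body is ever interpolated into a command.
SERVICE_MAP = {
    "web": ["systemctl", "restart", "apache2"],
    "updater": ["systemctl", "restart", "config-updater"],
}

def restart_service(key):
    argv = SERVICE_MAP.get(key)
    if argv is None:
        raise KeyError("unknown service key: %r" % key)
    # Passing a list (shell=False) avoids shell injection entirely;
    # the worst a forged message can do is restart a listed service.
    subprocess.check_call(argv)
```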
To fully exploit the system, a malicious user would have to inject their own configuration into a service (maybe by putting in Proxy directives to an Apache configuration file or something) and invoke a restart of the service. To do this, they’d need to:
- Intercept a message to learn the key of the configuration file they want to inject their own configuration into.
- Intercept another message to learn the key of the service that needs restarting after the injection.
- Send the appropriately-crafted message through the message broker which is secured with a username and password.
We use an internal firewall here to segregate our AS/400 from the rest of the LAN for PCI compliance reasons. I suspect we could do the same thing for our cloud, so that a malicious user would first have to gain access to a machine behind the internal firewall before they could even contact the RabbitMQ server to exploit these services.
I honestly don’t know what security best practices are going to look like for hybrid/virtual private clouds. Thankfully, I’m not the only one asking these questions. As the cloud environment commands a greater share of the market, I’m sure people who are much smarter than I am can offer more practical solutions than a simple firewall and username/password security plan.