The personal news reader NewsBlur was down for several hours last week following a data exposure. (“newsblur start” by renaissancechambara is licensed under CC BY 2.0)
The recent story about the personal news reader NewsBlur being down for several hours last week following a data exposure turns out to have a happy ending: the owner retained an original copy of the compromised database and restored the service in about 10 hours.
The database exposure itself was caused by a persistent problem with Docker, an issue that has been well-known in the Linux community for several years. When sysadmins use Docker to containerize a database on a Linux server, Docker inserts an “allow rule” directly into iptables, opening up the database to the public internet and bypassing any rules set through the Uncomplicated Firewall (UFW). This requires sysadmins to reconfigure the firewall and insert new rules specifically for Docker.
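As a hedged sketch (the exact NewsBlur configuration has not been published, and the port and interface names below are assumptions), the difference comes down to how a container port is published. Docker's default publish syntax opens the port on all interfaces via its own iptables rules, which UFW never sees:

```shell
# Default publish: Docker adds an iptables ACCEPT rule that bypasses UFW,
# so MongoDB's port 27017 is reachable from the public internet.
docker run -d -p 27017:27017 mongo

# Binding the published port to the loopback interface keeps it
# reachable only from the host itself:
docker run -d -p 127.0.0.1:27017:27017 mongo

# Alternatively, block outside traffic in the DOCKER-USER chain, which
# Docker evaluates before its own forwarding rules (interface name
# "eth0" is illustrative):
iptables -I DOCKER-USER -i eth0 -p tcp --dport 27017 -j DROP
```

Either approach keeps a containerized database off the public internet even when Docker is managing iptables itself.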
In the NewsBlur case, the database (about 250 gigabytes) was compromised and ransomed while the RSS reader was using Docker to containerize a MongoDB instance. NewsBlur founder Samuel Clay said in a blog post that the hacker was able to copy the database and delete the original in about three hours.
Clay followed some best practices that security pros should consider: right before he transitioned to Docker, he shut down the original primary MongoDB cluster, which went untouched during the attack. Once he became aware of the issue, Clay took a snapshot of the original MongoDB, restored the database, patched the Docker flaw, and got the service back up and running.
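The recovery path Clay describes can be sketched with standard MongoDB tooling, though his actual snapshot may have been taken at the disk or cluster level; the hostnames and paths below are hypothetical:

```shell
# Snapshot the untouched original cluster (hypothetical host and path):
mongodump --host original-primary.internal --out /backups/newsblur-snapshot

# After patching the Docker/iptables exposure, restore the snapshot
# into the repaired deployment (hypothetical host):
mongorestore --host restored-primary.internal /backups/newsblur-snapshot
```

The key point is having a known-good copy that the attacker never reached: the restore is routine once the compromised deployment has been locked down.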
Examples like the NewsBlur case demonstrate why container security has become one of the fastest-growing areas in cybersecurity, said Ray Kelly, principal security engineer at WhiteHat Security. Kelly said companies are rushing to containerized architectures for the scalability and redundancy they offer, and they often don’t consider the security implications.
“Each container is essentially its own mini OS and needs to be vetted for security vulnerabilities such as privileges/roles, application layer attacks and in this case network and firewall openings,” Kelly explained. “Ideally these tasks are done before going live. Unfortunately, security often gets placed in the back of the queue in favor of new features and fast deployments.”
Andrew Barratt, managing principal, solutions and investigations at Coalfire, added that as the architecture of our systems becomes more complex, with more automation, containerization, and virtualization, it’s vital not to make assumptions about the security configurations.
“Active testing of these would have caught this early if done by an experienced adversary simulation or most likely even a more traditional architectural review,” Barratt said. “Sadly, the tables got dropped and the data was no more. At least in this instance, there was no need to pay the ransom.”