Are you a data exhibitionist? Why you should protect your sensitive ports


You rarely set out to be a data exhibitionist. It happens surreptitiously. Then one day, your databases end up on Shodan, naked and exposed

By Merlin Carter

You rarely set out to be a data exhibitionist. It happens surreptitiously. Someone puts an internal tool in the cloud. A project is quickly hacked together and then used in production. Before you know it, you’re exposing database ports to anyone sniffing around. And believe me, there are hordes of predators prowling for ports.

Take the (fairly) recent “meow” attack during the week of July 20–26. Perhaps you’ve heard about it?

Ongoing Meow attack has nuked >1,000 databases without telling anyone why

…was the headline that ran in Ars Technica on July 23 of that week.

“More than 1,000 unsecured databases so far have been permanently deleted in an ongoing attack that leaves the word “meow” as its only calling card, according to Internet searches over the past day.”

But at the end of the week, that number was nearing 4,000 databases. At the time of writing, there’s still no news on the motivation behind the attack.

Perhaps this person simply wanted to chastise all the careless database users out there.

If that’s true, it’s a pretty painful way to do it.

Now, I should mention here that, according to Bleeping Computer, 97% of the affected databases were Elasticsearch and MongoDB, which ship with less secure defaults than older database management systems such as MySQL.

This problem has been known for years. It’s just that people have been very slow to take it seriously. In his 2015 article, “It’s the Data, Stupid!” Shodan founder John Matherly pointed to these vulnerabilities:

“At least with MySQL, PostgreSQL and much of the relational database software the defaults are fairly secure: listen on the local interface only and provide some form of authorization by default. This isn’t the case with some of the newer NoSQL products that started entering mainstream fairly recently. For the purpose of this article I will talk about one of the more popular NoSQL products called MongoDB, though much of what is being said also applies to other software (I’m looking at you Redis).”

If you’ve never heard of Shodan, you should check it out. It’s a user-friendly search engine that lets you find any device or server that’s connected to the internet. You can drill down for more details about a device and inspect all its open ports. It’s a key resource for hackers and penetration testers.

Here’s a fun task for you — just go and sign up for a free Shodan account and then run the following search:

"Set-Cookie: mongo-express=" "200 OK"

There, you will likely find the countless husks of databases that were ravaged by the diabolical kitty-themed attack (every wiped database has the word “meow” appended to its name). To see the gory details, just click the little red arrow icon next to each search result.

Screenshot: Example of an unsecured database ravaged by meow

By the way, that search query I gave you? It was a simple way to check for unsecured Mongo databases. There are all sorts of other queries tailored to checking for different database and device types.
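If you’d rather run those queries from a script than from the website, Shodan also publishes an official Python library. Here’s a minimal sketch, assuming your API key is stored in a SHODAN_API_KEY environment variable (note that API searches generally need a paid membership tier):

import os
import shodan

# A sketch only: run the same search from a script instead of the web UI.
# Assumes your API key is stored in the SHODAN_API_KEY environment variable.
api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

results = api.search('"Set-Cookie: mongo-express=" "200 OK"')
print("Total results:", results["total"])

for match in results["matches"][:10]:
    print(match["ip_str"], match["port"], match.get("org", "n/a"))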

“OK fine…” you might be thinking… “That’s definitely spooky, but how do I check if MY company is affected by this issue?”

Well, for a practice run, you could just ping your company’s website for the IP address and enter the IP into Shodan. But that’s probably not going to reveal anything interesting. Exposed databases are usually running on another server with a different IP. How do you find them?
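If you’d rather script that practice run, the same Python library can do host lookups too. Again, just a sketch: example.com is a placeholder for your own domain, and the API key is assumed to be in SHODAN_API_KEY.

import os
import socket
import shodan

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Resolve your own domain to an IP address (example.com is a placeholder)
ip = socket.gethostbyname("example.com")

# Ask Shodan what it has indexed for that IP, including open ports
try:
    host = api.host(ip)
    print("Open ports:", host.get("ports", []))
except shodan.APIError as error:
    print("Shodan has nothing on this IP (or the lookup failed):", error)

Of course, that only covers servers you already know the address of, which brings us back to the question.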

Let me explain how we found one in our company.

It was a few years ago when we found this exposed database by accident. It belonged to a business-oriented team. Exactly which team isn’t important, but let’s just say it was the kind of team that doesn’t typically touch any code. They wanted to build a little tool that would exchange data between Pipedrive (CRM) and one of their own internal systems.

Normally, my team (the IT team) would help out with tools like this. But there was a trainee with some technical chops in this other team, so they decided to do it on their own.

The trainee whipped up a basic integration, spun up a Kubernetes cluster in Google Cloud, and installed his new integration there. It worked fine, but he had made some glaring security mistakes.

We discovered this when the trainee moved on to another role (as they generally tend to do). His team wanted to have the Google Cloud costs transferred to the IT team. It turned out that the trainee had been using a private Google Cloud account and paying with his company credit card.

One of our developers sat down and inspected the code. He saw horrible things. Hard-coded production credentials which were then committed to GitHub in a public repository — open for the world to see.
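The boring-but-correct pattern here is to keep secrets out of the code entirely and read them from the environment (or a secret manager) at runtime. A minimal sketch; the variable names below are placeholders, not the actual tool:

import os

# Read secrets from the environment at runtime,
# so nothing sensitive is ever committed to the repository.
PIPEDRIVE_API_TOKEN = os.environ["PIPEDRIVE_API_TOKEN"]
DB_PASSWORD = os.environ["DB_PASSWORD"]

# Not this: DB_PASSWORD = "hunter2"  # hard-coded and pushed to a public repo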

But that’s not all. We determined the IP of the host and searched for it on Shodan. And what did we find? An exposed MySQL database.

The database contained a copy of the information that was being pulled out of Pipedrive, so a lot of details about leads and potential deal sizes. Not the kind of data you want a malicious actor to get their hands on.

Now, as I mentioned previously, MySQL does force you to set a password, so people couldn’t just view the data like they could for those exposed MongoDB and Elasticsearch databases.

However, there are vulnerability databases such as CVE Details, where you can look up all the known vulnerabilities for a particular product. Here are the results for MySQL.

Screenshot: A list of CVEs found in MySQL

Any hacker who knows what they’re doing can find an exploit for your version of MySQL and infiltrate your database. The simplest method is a brute-force attack, where automated tools churn through password guesses until one works. The message is clear: you do not want your database to be indexed by Shodan.

How did we fix it? First, we put the tool on a local VM since it was overkill to put it in a Kubernetes cluster. We then changed the credentials and removed all the hard-coded references and cleartext secrets from GitHub. As far as we knew, no damage was done, but we informed the management team nonetheless.

A process problem

So now you have the technical details, but how do you stop this kind of thing happening in the first place? The fact is, the vulnerability was really caused by a process problem.

You might see it as a case of a team “going rogue”, but I completely understand the impulse to do it yourself. Especially if you work in a corporate behemoth with a nebulous bureaucracy that requires 20 JIRA tickets to get anything done. But that’s not us. What was missing was a simple checkup from one of our developers.

If you have someone on your team who can whip up a fancy little tool that adds a lot of value — great! But if you’re going to expose it to the internet, make sure you ask an expert to give it the once-over.

For IT teams, it’s important to remind other departments about this from time to time. You can’t automate every aspect of security governance, but you can remind people of the security risks involved whenever they build their own apps and put them online (I gave similar advice in an earlier article on third-party apps).

As I wrote back then, it’s important not to come across as a police officer who wants to take away everyone’s tools. You just want to make sure that they’re secure. And when mistakes do happen, it’s important not to punish people.

This principle is sometimes known as “psychological safety” — a concept outlined extensively in the novel “The Unicorn Project” (a 2019 bestseller about developers working to modernize a large auto parts retailer).

The principle is based on the belief that people will hide problems and mistakes (or side-projects) when there is a culture of fear. In contrast, a culture of safety will encourage people to share more if they know they won’t get blamed or shut down.

In our incident, we first conducted a “blameless post-mortem” and tried to reconstruct a timeline of the key events; the most important task is to produce a detailed account of what actually happened. We then educated the person and their team on the seriousness of the issues and gave them precise guidelines on when IT needs to be involved.

In my opinion, any decent employee will punish themselves enough for making these kinds of mistakes, so taking a strict disciplinarian approach is usually counterproductive.

The best approach is to create a culture where people feel safe to come forward and admit they might have caused a security flaw. They’ll also be more likely to do this if you educate employees on how these security holes can open up in the first place. Indeed, we’ve all become very good at educating employees on GDPR and data privacy. It’s time we did the same for network security.

PS: For more technical advice on prevention, check out these tips for securing a MongoDB database. While the exact details may differ, you can use the same advice for other database systems too.
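To give you a flavour of that advice, here’s a hedged sketch with pymongo: authentication switched on, credentials pulled from the environment, and the server listening only on localhost or a private network. The user, password, and database names are placeholders:

import os
from pymongo import MongoClient

# Connect to a MongoDB server that has authentication enabled.
# Credentials come from the environment, never from the source code.
client = MongoClient(
    host="127.0.0.1",  # the server itself should only listen on localhost or a private network
    port=27017,
    username=os.environ["MONGO_USER"],
    password=os.environ["MONGO_PASSWORD"],
    authSource="admin",
)

db = client["crm_sync"]  # placeholder database name
print(db.list_collection_names())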