
Dispatch from Black Hat Asia 2025: To err is (still) human

Black Hat Asia 2025 has come and gone, and it was another whirlwind of a conference. Thank you to our partners—Arista, Cisco, MyRepublic, and Palo Alto Networks—for making it a successful conference! It’s an exhilarating experience: sitting in the darkened NOC, listening to the beat of electronica, and hunting through the excellent network evidence we gathered using our Corelight Open NDR monitoring stack. There’s always something interesting to find!

One thing never seems to change: No matter how hard everyone pushes to use good encryption, to enable it by default, and to adopt more privacy-preserving measures, 100% coverage remains an elusive goal. There are still apps with insecure defaults despite the best of intentions, developers who make mistakes (whether from inexperience or from just having an “oopsie” kind of day), software that hasn’t been updated to take advantage of new versions, and more.

Broadcast your location? Gladly.

One thing we often see on a large network like this is an app or two that reports user location data back to a server, forgetting (or neglecting) to use an encrypted connection. We saw it again in Singapore, where a handful of users had an app installed on their iOS devices that checked for weather information, and in the process reported exact location information over HTTP. It’s certainly not the end of the world, but it’s definitely not best practice.

Also, I don’t think you need a weather app in Singapore. Oversimplifying just a bit, the forecast day-in-day-out is: sunny, high of 99, 100% humidity, and a chance of rain. (I am, of course, being cheeky; these apps were probably installed long before the users decided to come to Black Hat Asia.)

Of perhaps more interest, this app also reports the exact version of the user’s phone operating system in the clear. This is more concerning: if a user is running an outdated, vulnerable version of their phone’s operating system, a passive observer could glean that information and use it to target them with an exploit.

Developers in this day and age should always use TLS-encrypted sessions to transport information between servers and end-user devices, protecting against snooping and, more importantly, tampering with communications.

The dangers of self-hosting

Self-hosting applications, even pre-packaged ones, is common in the technology world for a number of reasons. Some people or businesses do it to keep costs down. Others do it for privacy reasons, to keep their data under their own control. Some even do it for the challenge and the learning experience.

One lesson that is often learned the hard way with self-hosting software is that you are responsible for everything, including the security of the stack. Many web applications that can be self-hosted default to an HTTP server. This is presumably to avoid the browser errors that come with having a self-signed certificate, and the “my browser warned me you were bad” tickets that users will file on the project’s issue tracker. However, this means that it is up to the person self-hosting the software to recognize that they need to procure a certificate for the application, and either manually enable HTTPS (if the app supports it natively) or place the application behind a reverse proxy that supports HTTPS, of which there are too many to mention.
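To make the idea concrete, here is a minimal sketch in Python’s standard library of serving content over HTTPS only, with no plain-HTTP listener at all. This is an illustration, not production guidance: the certificate and key paths are placeholders, and in practice you would use a certificate from your CA or an ACME client.

```python
import http.server
import ssl

def make_server_context(certfile=None, keyfile=None):
    """Build a server-side TLS context that refuses legacy protocols."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3/TLS 1.0/1.1
    if certfile:
        # "cert.pem"/"key.pem" are placeholder paths for this sketch.
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# Usage sketch (requires a real certificate on disk):
#   httpd = http.server.HTTPServer(("", 8443),
#                                  http.server.SimpleHTTPRequestHandler)
#   httpd.socket = make_server_context("cert.pem", "key.pem").wrap_socket(
#       httpd.socket, server_side=True)
#   httpd.serve_forever()
```

The key design point is what is absent: there is deliberately no HTTP listener to fall back to, so a client cannot accidentally send anything in the clear.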

We often see these at the conferences, with stacks that instructors stand up for classroom activities. We assume that since those stacks, and all the resultant secrets and data, are both ephemeral and contrived, the instructors simply don’t care whether the students’ interactions with them are encrypted. The consensus in the NOC: that’s fine. It doesn’t set a great example, but it’s not immediately harmful.

However, we often see attendees accidentally self-hosting applications in the clear, as well. For example, at Black Hat Europe 2023 we observed an attendee synchronizing photos from their phone to a Synology NAS, and at Black Hat Europe 2024 we similarly saw a couple of attendees interacting with self-hosted NextCloud and Guacamole servers.

This is why we weren’t surprised this year at Black Hat Asia when we noticed logins to various platforms over HTTP. The next step is to figure out what category these belong to: classrooms, broken things, or mistakes.

And a 5, 6, 7, 8…

Our first example was a fairly clear leak of a username and password in the login. Interestingly, this application is hosted on port 5678.

By looking at the server headers, we can get a clue about the identity of the application.

It turns out this application is n8n, a self-hostable low-code/no-code automation platform with an emphasis on AI-assisted operations.
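As a concrete illustration of that header-based fingerprinting, here is a small Python sketch. The response bytes below are invented for the example (they are not the actual capture from the conference network), but the technique is the same: the `Server` header and cookie names often hint at what is running behind a port.

```python
# Fingerprint a cleartext service from its HTTP response headers.
# RAW_RESPONSE is a fabricated example, not real captured traffic.
RAW_RESPONSE = (
    "HTTP/1.1 200 OK\r\n"
    "Server: nginx\r\n"
    "Set-Cookie: n8n-auth=<redacted>; HttpOnly\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
)

def parse_headers(raw: str) -> dict:
    """Split off the status line and return header fields as a dict."""
    head = raw.split("\r\n\r\n", 1)[0]
    lines = head.split("\r\n")[1:]  # drop "HTTP/1.1 200 OK"
    return dict(line.split(": ", 1) for line in lines if ": " in line)

headers = parse_headers(RAW_RESPONSE)
print(headers.get("Server"))      # the web server in front of the app
print(headers.get("Set-Cookie"))  # cookie names often name the app itself
```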

Forgetting to use SSL/TLS is an easy mistake to make, especially when, as here, the application defaults to going without it. To n8n’s credit, its documentation does cover how to enable TLS and how to host the application behind a reverse proxy that terminates TLS for it. Still, having TLS enabled by default would have prevented this exposure.

That’s a mighty high port you’ve got there

Another example of a user exposing their credentials at the conference was this login:

The user’s email address and password were exposed completely. Again, the server headers give a clue as to what the server was: a self-hosted GitLab instance.

Since the email address belongs to a corporate domain, we believe this is likely a company resource. Unfortunately, any time an employee logs into this server from outside the corporate network, they expose everything a passive observer needs to gain access to that GitLab instance and do anything the credentials allow: download stored software, commit changes, or perhaps even delete repositories. Oops! The organization could be very well secured otherwise, but because one self-hosted GitLab server is exposed to the world over HTTP, many of those other controls are invalidated. And what if those credentials are re-used across other applications as well?

Please call me back on a secured line

In one case, we happened to notice that an application was accessing a server over HTTP, but it was getting a 302 “Moved Temporarily” response from the server redirecting the app back to HTTPS. However, since the client sent all of its headers, including session information and cookies, with that initial request, the damage was already done. Interestingly, we didn’t see many of these requests, but we did see other SSL/TLS traffic to the same server, so the app seems generally well-behaved; some functionality inside it simply makes this errant request to an HTTP endpoint and leaks the user’s session information.
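A short sketch shows why the redirect arrives too late. The client’s very first request already carries its headers across the wire in cleartext; the hostname, path, and credential values below are invented for illustration.

```python
def build_request(host: str, path: str, headers: dict) -> bytes:
    """Assemble the raw bytes of an HTTP/1.1 GET request."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    lines += [f"{k}: {v}" for k, v in headers.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

# Hypothetical client request to an http:// URL. Everything here crosses
# the network unencrypted *before* the server's 302 redirect can arrive,
# so a passive observer already has the session material.
wire_bytes = build_request(
    "app.example.com", "/api/profile",
    {"Cookie": "session=deadbeef", "Authorization": "Bearer tok_123"},
)
print(wire_bytes.decode())
```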

How is this still happening?

I think we often imagine technology is infallible. It is generally deterministic, it doesn’t need to sleep, and it works most of the time. However, technology and software are still made by people, and people make mistakes. That one subroutine in the app that accidentally calls an HTTP endpoint instead of an HTTPS endpoint? It could have been handed off to an inexperienced intern, written by a developer recovering from a really late party, or perhaps the developer hit self-signed-certificate issues in the development environment, switched to HTTP, said, “I’ll fix it when we release to production…” and forgot. Those self-hosted applications without TLS encryption? It could be someone who doesn’t understand the privacy concerns and leakage, or someone who intended to fix it later and hasn’t gotten around to it. For whatever reason mistakes happen, they still happen, even in 2025, so they’re worth watching for.

So…what should I do about it?

First, for all of these cases, the Black Hat team will attempt responsible disclosure to the attendees and/or the application developers, because it’s probable that they don’t know they have this issue. By bringing it to their attention, those maintainers will have the opportunity to take action to make things better.

Second, if you’re self-hosting a web application, make sure that it is running with SSL/TLS encryption enabled, either natively from the application, or by placing it behind a reverse proxy that supports TLS. More importantly, turn off the HTTP listener entirely. It used to be common practice to listen for unencrypted HTTP connections and then send a redirect to the client to the secured version of the URL, but since most browsers nowadays prefer HTTPS and will attempt to connect via HTTPS first, it’s simply a better practice to only allow HTTPS connections. That way, clients won’t be able to accidentally spill their secrets such as session cookies while attempting to connect to the server and request a resource.

Finally, if you have a network, monitor it. By watching your network traffic, you can easily see if you have applications exposed to the Internet without encryption, and can tell which users’ passwords have been exposed so you can rotate them. You can also tell if any of your users are using applications which may have flaws that expose things that should otherwise be secret, and then take action to notify your users, adjust policies, and maybe even notify the application developer so they can make things better for everyone! That’s not even mentioning the main use case for monitoring your network traffic: You can have the evidence you need to close trouble tickets, perform robust incident response, and to answer questions like “have any other computers talked to that compromised server which I may have missed?”; “how many users are using this application we want to shut down?”; and, “I thought we decommissioned that mainframe years ago, why is it still online?”
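As a starting point for that kind of hunting, here is a minimal Python sketch over Zeek-style http.log records (exported as JSON lines, as Corelight sensors and Zeek can both produce). The field names follow Zeek’s http.log schema; the sample records, hosts, and login paths are fabricated for the example. Anything that appears in http.log at all was, by definition, unencrypted.

```python
import json

# Fabricated Zeek-style http.log records (JSON export) for illustration.
SAMPLE_LOG = """\
{"id.orig_h": "10.0.0.5", "id.resp_p": 5678, "method": "POST", "host": "n8n.example.com", "uri": "/rest/login"}
{"id.orig_h": "10.0.0.9", "id.resp_p": 8080, "method": "GET", "host": "ok.example.com", "uri": "/index.html"}
"""

# Hypothetical URI fragments that suggest a login endpoint.
SUSPICIOUS_PATHS = ("/login", "/signin", "/session", "/auth")

def find_cleartext_logins(lines):
    """Flag POSTs to login-looking URIs; http.log traffic is cleartext."""
    hits = []
    for line in lines:
        rec = json.loads(line)
        if rec.get("method") == "POST" and any(
            p in rec.get("uri", "") for p in SUSPICIOUS_PATHS
        ):
            hits.append((rec["id.orig_h"], rec["host"], rec["uri"]))
    return hits

print(find_cleartext_logins(SAMPLE_LOG.splitlines()))
```

In practice you would also inspect the POST bodies (where your sensor extracts them) to confirm which credentials were exposed and need rotating.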

We saw these things (and more; see our related posts “Black Hat Asia 2025 NOC” and “Hunting at Black Hat Asia 2025”) just by watching the Black Hat conference network, with some of the best technologists and hackers visiting. What risks might be lurking on your network?
