Chunks of Disney’s internal Slack messaging are being released to the public via BitTorrent, and surprisingly the Russia-based hacking group behind it does not seem interested in a ransom. Rather, the group is calling the data leak a form of “hacktivism” meant to protest Disney’s adoption of AI-generated art for some aspects of its business.
The hackers call themselves “NullBulge” and are a relatively new group that first became known to security researchers in May. While the group actively seeks to steal credentials, mostly via malware-injected downloads of otherwise legitimate software, it says that it only attacks targets that commit “sins.”
Contents of 10,000 internal Disney Slack messaging channels purportedly captured by hackers
According to its public presence, NullBulge has declared war on AI artwork and cryptocurrency in all forms. Hackers often target Slack messaging channels due to their relatively weak security and their likelihood of containing sensitive, valuable information, particularly at a company the size of Disney (which reportedly had about 10,000 such internal channels breached). But it is very unusual for such a major breach to be dumped to the public immediately, with no apparent interest in money. Disney has not yet commented on whether it has been approached privately for a ransom.
Disney has yet to publicly confirm the data leak, and it may well be that the supposed “hacktivists” are actually trying to negotiate a payment behind the scenes. Thus far they have only dumped a portion of what they claim to be about 1.1 TB of stolen information and could be quietly seeking a payment for withholding the rest of it.
Time will tell what the hackers’ motivations actually are, as this is their first major data leak. The group previously took credit for compromising a popular Stable Diffusion interface tool, and says that it hacked a Disney employee via malware injected into a video game mod.
NullBulge may have initiated data leak by extorting Disney employee
Piecing together different statements by the group on both the dark web and “clearnet” social media, it looks as if the hackers convinced a Disney employee to open the door to the company’s Slack messaging for them via a compromised video game mod that got them into the employee’s personal device. The group has said that an “inside man” let them in but got “cold feet” after they had spent some time exfiltrating data; the hackers then doxxed the employee and dumped the contents of his personal 1Password vault to the web in retaliation.
In spite of this rather underhanded approach, the group postures as a band of Robin Hood figures acting to protect artists’ jobs and paychecks. Disney has met with some popular backlash over generative AI since late 2023, when it was first spotted using the technology in promotional materials. It has since escalated to using AI to generate the opening credits of one of its Marvel television series, and other internal leaks indicate the company has a task force dedicated to cutting costs by deploying AI in its creative areas.
The chunk of the data leak that has already been made available does at least look legitimate. Security researchers have thus far found discussion of and materials from unannounced upcoming projects, login credentials, code and information about the development of various company websites, and job applications, among other potentially damaging items. While it is not necessarily irresponsible to keep some of this material on Slack for internal sharing, the incident raises serious questions about how one employee account was seemingly able to freely traverse all of these different areas.
Organizations need tools like Slack to function, but what can they do to prevent similar data leaks? This incident serves as the umpteenth reminder that multi-factor authentication should be enforced, and that potentially sensitive materials should be segmented into private Slack channels with limited, need-to-know employee access. It also demonstrates that more attention may need to be paid to how API keys are handled, and to whether logging is adequate to detect anomalous access in a timely manner.
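As a rough illustration of that last point, a minimal anomaly check over exported access logs could simply flag accounts that touch an unusually large number of distinct channels — the pattern that apparently went unnoticed here. Everything below (the event format, function name, and threshold) is a hypothetical sketch, not Slack’s actual audit-log API:

```python
from collections import defaultdict

def flag_broad_access(events, max_distinct_channels=50):
    """Return the set of users who accessed more distinct channels
    than the threshold allows.

    `events` is an iterable of (user, channel) pairs — e.g. parsed
    from an audit-log export. The threshold of 50 is illustrative;
    real baselines would be tuned per organization or per role.
    """
    channels_by_user = defaultdict(set)
    for user, channel in events:
        channels_by_user[user].add(channel)
    return {user for user, channels in channels_by_user.items()
            if len(channels) > max_distinct_channels}
```

A scheduled job running a check like this against daily logs would have surfaced a single account sweeping through thousands of channels long before 1.1 TB of data left the building.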