A website for the M1racles M1 Apple chip flaw discovered by independent researcher Hector Martin. Some in the security research community are concerned that overmarketing of vulnerability disclosures is misleading the public about their true impact.
Earlier this week, a well-respected security researcher released new details on a hardware flaw in a brand-new processor chip made by Apple. It allows two applications running on the same operating system to covertly communicate and exchange data, and it can be exploited regardless of user status or account privileges. Worst of all, it’s built into the design of Apple’s M1 chip, meaning it cannot be patched or fixed without a hardware redesign.
Oh, and one more thing: it’s not really a threat to you or your organization in any meaningful sense.
“M1racles” is a real flaw discovered by independent security researcher Hector Martin with a real CVE, and a website he created for the disclosure offers all the underlying technical details and proofs one can expect in a typical vulnerability disclosure. But despite his breathless description in the introduction, a “Frequently Asked Questions” section further down makes it clear he doesn’t think that businesses, or individuals, or anyone should really be too worried about it.
“Really, nobody’s going to actually find a nefarious use for this flaw in practical circumstances,” Martin wrote in one section. “Besides, there are already a million side channels you can use for cooperative cross-process communication and…covert channels are completely useless unless your system is already compromised.”
It might seem odd then that Martin took the time to develop a splashy webpage, come up with a catchy name and record a video demonstration for a bug he doesn’t consider troubling, but it directly relates to a larger point he is trying to make about the way vulnerability disclosures are marketed to the media and public.
In other words: “Just because it has a flashy website or makes the news doesn’t mean you need to care.”
“Very often there’s a deep disconnect between the practical impact of a vulnerability and how it’s marketed, and the media cycle that ends up growing around it,” Martin told SC Media in an interview. “Sometimes you get [disclosures] that are just, I promise you, completely and utterly useless and it becomes this giant media cycle.”
There are a variety of factors that can lead to a low-impact vulnerability being reported as an immediate, urgent threat to IT and security practitioners. Researchers may genuinely disagree on the severity, or they may not have final say over how their work is marketed by their company. They may have unconscious biases that lead them to inflate the importance of a bug they found, or neglect to include details or context that serve to downplay the impact. Sometimes, journalists or consumers may only read the top few paragraphs and fail to fully understand or explore the implications of the underlying technical research.
Beyond the FAQ, Martin did his best to drop hints throughout the website that this wasn’t quite the threat the summary makes it out to be: a link at the top of the page titled “Should you be worried? Probably not” takes you straight to the section where the flaw’s actual impact is described in far more sober and less sensationalistic tones. Still, he noted with amusement that some news outlets had actually covered M1racles as a straightforward vulnerability that the public needed to know about – essentially proving his point about the way some flaws are misleadingly framed to the public.
To be clear, while he wants journalists to fully absorb the research they’re reporting on and talk to other researchers outside of the discovering organization about impact, Martin believes security researchers have an obligation to describe the vulnerabilities they find honestly to less technical audiences, and to provide any important context that could head off FUD – an industry term for “fear, uncertainty and doubt.”
“I don’t know to what extent it’s deliberate, to what extent it’s negligence, but the information security community…is actually doing a pretty bad job of sort of explaining these things to lay people, to people in the media, to people who are going to be covering this,” he said. “It’s very, very easy to overhype something or just neglect to talk about the parts that mitigate the [flaw], and that was kind of my thought” when creating the website.
How best to communicate or market vulnerabilities to the public is a frequent topic of debate in the information security community. In particular, practices like devising snappy-looking custom websites and catchy names for new campaigns or flaws are not a new phenomenon (bugs like Heartbleed were getting this treatment as far back as 2014), but the tactics do raise questions about whether the goal is to scare or to accurately inform the public.
On the one hand, it can help researchers and companies stand out in a crowded vulnerability reporting environment. On the other, it can also be leveraged to leave readers with the impression that a flaw is more impactful than it actually is.
“Optimistically, naming and marketing a serious vulnerability or exploit is a good thing. It gets attention and it makes it easier for researchers to discuss [and] it moves people to develop and install patches or otherwise remediate,” said Brian Donohue, a senior security specialist at threat intelligence firm Red Canary, in an email. “However, the non-serious exploits on the self-promotional side of the spectrum muddy the waters by making it hard to differentiate between marketing hype and serious business.”
Donohue said individual researchers and research consortiums often have substantial control over the way their work is framed or presented to the public. When the impacted vendor is involved in the disclosure, things become more “complicated” and researcher input might lose out to other stakeholders.
While media and consumers should train their minds to treat named and overmarketed disclosures with the same scrutiny they bring to any other reported vulnerability, Donohue said some researchers and companies can be too close to their own work in a way that can color their perspective.
“Researchers spend a great deal of time discovering these vulnerabilities, developing proof-of-concept exploits for them, and explaining how they work in blogs and to their colleagues. They’re also justifiably proud of their work,” said Donohue. “All of these factors can create a sort of echo-chamber effect where the folks who discovered the exploit become biased and may lose sight of the big picture and overestimate the importance or severity of their work.”