CyberAlarm: Testing the "production version"... and why you should avoid it.
I can't honestly recall the last time I was left seething with rage; I'm not an angry person. That said, I've never been called a liar or been accused of acting in bad faith by the NPCC before... but hey, the increased heart-rate should burn a few much-needed calories.
Following my review of the infamous "development/test build", NPCC published an article stating my claims are "completely untrue", that it was "a completely different piece of software" which "has never formed part of the Police CyberAlarm system in ANY live version", and that I "stumbled upon" the development link.
Their rebuttal pivots on two issues:
1) I "stumbled upon" the development link, therefore would have known it wasn't CyberAlarm.
2) The production version isn't based on the development "tool" (which doesn't make sense), thus it would "be impossible to find the issues raised".
I've made every effort to protect the force's reputation and the public, even extending the metaphorical olive branch after the first (of two) insulting cease & desist letters, but I'm left with no choice but to publish the entire debacle in an effort to salvage my reputation.
This is likely to be an enormous post, but I will try to keep it to a minimum; it's not intended to be a sedative.
I don't want to spend too long discussing the "development/test build" as I've pretty much covered everything I could responsibly mention in my previous post, but in order to explain exactly what's happened, I need to go back to the beginning.
30th September 2020:
I filled in the "registration" form on the official CyberAlarm website and received an auto-response.
2nd October 2020:
I received an email from Leicestershire Police, apologising for the delay and attaching 4 PDFs which include sample reports, threat summaries etc.
I replied, asking for an installation guide.
I received a reply with the installation guide attached.
I downloaded the installation guide.
... then replied to <redacted>
Here's a snippet of the installation guide, or you can view the offending article yourself from my backup.
Keep in mind the filename of the PDF... ending "310720", which obviously refers to the date it was updated. Assuming they've sent me the most recent installation guide, that's the last time it was updated. For 2 months, this has been the document members receive.
You might be wondering why I think it was updated, rather than created on that date, but I'll come back to that.
If you take a look at the "downloadable" URL - it clearly shows "https://www.cyberalarm.police.uk/CyberAlarm.ova" which is the production build.
So, how did I end up with the test version?
The astute amongst you may have already surmised; in some applications, updating a hyperlink merely changes the visible text but leaves the original URL in the background. When you hover over the link, the truth is revealed.
Although the text says "cyberalarm.police.uk", the actual hyperlink takes you to "cyberalarm.org.uk" - housing the dangerous internal build!
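For anyone unfamiliar with the trap, here's a minimal sketch of auditing a link's visible text against its real target. The HTML snippet is my reconstruction of the mismatch, not the PDF's actual markup.

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collects (href, visible_text) pairs so the two can be compared."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Reconstructed example: the text reads police.uk, the target is org.uk
snippet = ('<a href="https://cyberalarm.org.uk/CyberAlarm.ova">'
           'https://www.cyberalarm.police.uk/CyberAlarm.ova</a>')
parser = LinkAudit()
parser.feed(snippet)
href, text = parser.links[0]
print(href != text)  # True: the visible text and the real target disagree
```

Word processors and PDF exporters make this mistake disturbingly easy: edit the display text, and the underlying href silently stays put.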
Of course, they deny the screenshots exist too & claim I never pointed it out, despite two members of staff actually thanking me for raising it! (proof of that later, so as not to confuse the timeline of events).
This is absolutely irrefutable proof that, contrary to NPCC's claim that I stumbled upon it, I simply used the link they provided by email. They go on to claim it would have been obvious that it wasn't a "real" CyberAlarm data collector, having later seen the "production" one.
Spinning up the "test build" VM:
There's the one & only UI. The PDF said "CyberAlarm", the website said "CyberAlarm", the filename is "CyberAlarm.ova" and the UI says it's..... CyberAlarm. But, I'm expected to know it's a demonstrably dangerous, internal build of an "OpView Collector".
The next hour and a half was spent dissecting the image, uncovering an astonishing array of mind-blowingly awful code, disabled security controls, outdated dependencies, insecure endpoints... just about every rookie mistake a developer could make, all lovingly packaged into something they expect the public to run on their internal network. I'll bet Huawei's code isn't as bad as this... but I digress.
Needless to say, I emailed back to say I couldn't recommend this to customers.
Let me be the first to say that without context, that was an unreasonable comment and I wouldn't have expected them to action it without investigating themselves. However, it hopefully does show that whilst the main aim was to protect the public, I was also mindful of the huge reputational damage this would do to member forces and the NPCC.
However, as the scale of their incompetence began to sink in, I realised I couldn't just leave it there and needed to report it formally.
3rd October 2020:
I called <redacted> (from Leicestershire Police) on the mobile number (also redacted) from his email above.
Despite being a rest day, he thankfully took my call and listened as I outlined everything I'd seen thus far. As expected, he asked me to put my findings in an email which I did shortly after.
I won't post the entire screenshot, as it's identical to the previous article.
He replies to say he's discussing the issues with the developers.
An update! That was quick!
So quite apart from being "a completely different piece of software", it is indeed the proof-of-concept/development version which formed the basis of the current CyberAlarm platform.
Pay careful attention to the statement "none of the below codes/passwords/URLs you have mentioned have anything to do with the CyberAlarm system", referring to "P3rv3d3" and "P3rv4d3S0ftw4r3" from my disclosure email.
But, my concerns have apparently prompted a full penetration test... so that's job done, right?
At this point, they appear to be taking it seriously.
I'll snip out the dialogue of us finding a suitable time/date for the Teams meeting, but it ended up being 5th October 2020 @ 4PM, lasting well over an hour.
5th October 2020:
I dialled in to the meeting with <redacted> ready & waiting. Over the next hour or so, we discuss the outcome of their investigations, why none of it applies to the production version (in their view), how I "found" the URL to the test build and TL;DR - <redacted> uttered the phrase which will likely lead to the cancellation of this platform.
"We don't consider these to be deal breakers..."
I'm not paraphrasing either. That's verbatim.
What?! I was absolutely dumbstruck.
I've no idea if it was an attempt to placate me (not that I was hostile in any way), but <redacted> also asks me to join CDSV (Cyber Digital Specials & Volunteers); the replacement for the CSCV programme which was pulled a while back.
During the meeting, <redacted> generated an "access code" to allow me to download the "production" build and asked me to review it.
To protect myself, I asked him to put his request to review it in writing... which he emailed shortly after. It always pays to be careful... as disclosures can very quickly go sour.
Hang on, rewind! Look at that download URL...
The "installation guide" PDF earlier didn't mention "/download", so if you click their address, you get a one-way ticket to network compromise (or the "test build"), and if you copy/paste it, it 404s.
Download Link Broken: Check.
Are you still here? Wow. Let's take a breather for a moment and discuss where we are so far.
I've proven NPCC literally gave me the offending URL, so let's not go over that again. I've also demonstrated that I followed a fairly typical responsible disclosure process & they acknowledge my attempts to report the issues.
But, I'd like to take a step back to posit a question...
If this is a "2 year old test build" which "is not used anymore", why was it last updated in May 2019? Are we to believe they stopped using the product immediately after development? Of course not. I'd hope it'd be passed to beta testers, a QA team, pen testers etc. Realistically, mindful of how long it takes to launch a product like this in a Police force, it's likely to have remained in use for many months after. I can't prove that, but I think it's a reasonable assumption. Strange too they'd buy a TLS certificate to protect an endpoint they don't use, but perhaps that's to prevent redirect errors. Who knows.
Another question too... who ioncubes their own internal development builds?!
OK. Deep breath... let's dive back in.
Testing the "production build"
It's still October 5th, keep in mind.
I downloaded the installer and immediately opened the bash script, revealing this...
The installer is plain text - easily visible to anyone, as opposed to the actual code which is still encoded.
There are two vulnerabilities, right there! A 1024-bit RSA key and TLS security checks disabled... on the damn installer! I pointed both of these out in my email about the "test build" but they claim neither applies to production. If an attacker can trick your box into downloading any ol' .tar.gz, you're vulnerable to all manner of exploits, including the aforementioned RCE.
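To be clear about what "TLS security checks disabled" buys an attacker, here's a sketch using Python's ssl module. The collector itself is PHP, but the misconfiguration is the same class in any language: it amounts to deliberately constructing a context that trusts anyone.

```python
import ssl

# A correctly configured client context: the certificate chain and the
# hostname are both verified before any data is exchanged.
secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED
assert secure.check_hostname is True

# The moral equivalent of what the installer does: accept any certificate,
# from anyone, for any hostname. A man-in-the-middle can now serve an
# arbitrary .tar.gz and the client will happily trust it.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False   # must be disabled before verify_mode
insecure.verify_mode = ssl.CERT_NONE
```

Note how the secure behaviour is the default; you have to write extra code to make it this unsafe. (The 1024-bit RSA key is a separate sin: key sizes below 2048 bits have been considered inadequate for years.)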
After decoding "index.php" from the file which the installer grabs (insecurely), a similar trend begins to emerge.
TLS security checks, disabled.
Server "secret": "P3rv4d3S0ftw4r3".
You'll note, that's the same insecure "secret" which Pervade claimed didn't exist in the production version.
Okay, okay... that's just the registration script. Is server communication safe?
Yet again, TLS certificate checks are disabled everywhere. This is beyond debate.
Wait a minute...
The CURLOPT_URL parameter is calling an IP. Hmm... do they really have IPs in their certificate CN (common name)?
Not that I could find, or you wouldn't see an error like this.
Note, this is the IP address of the "cyberalarm" public site and this particular image is used as an example only. The system actually calls a "system API" on a URL I'm not prepared to release, but that too has no IP addresses in the CN.
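Hostname verification works against the certificate's subjectAltName entries, and an IP literal only matches an explicit "IP Address" entry. Here's a sketch using the dict shape Python's `ssl.getpeercert()` returns; the certificate data below is invented for illustration, and real verifiers also handle wildcards.

```python
def cert_covers_host(cert: dict, host: str) -> bool:
    """Check whether `host` (a DNS name or an IP literal) appears in the
    certificate's subjectAltName entries, as a TLS verifier would."""
    for kind, value in cert.get("subjectAltName", ()):
        if kind == "DNS" and value == host:
            return True
        if kind == "IP Address" and value == host:
            return True
    return False

# Invented certificate mirroring ssl.getpeercert()'s structure:
# DNS names only, no "IP Address" entries.
cert = {"subjectAltName": (("DNS", "cyberalarm.police.uk"),
                           ("DNS", "www.cyberalarm.police.uk"))}

print(cert_covers_host(cert, "cyberalarm.police.uk"))  # True
print(cert_covers_host(cert, "203.0.113.10"))          # False: calling by IP fails
```

Which is exactly why calling the API by raw IP produces a certificate error... unless, of course, you've disabled verification everywhere.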
Regardless, I try registering on the Police collection endpoint (previously located at <force_area>.cyberalarm.police.uk). Do not confuse this with the local data collector software above... this is now the admin UI which the forces have access to; it looks a little something like this.
Ahem... good luck making sense of that mess. You could cogently make the argument that if the system delivers "actionable data" like this, forget security... it's probably worthless.
Before you get there however, you need to click the "validate email" button they send.
Note the double protocol, making this a dead link. I later reported this one too.
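The "double protocol" class of dead link is trivial to catch automatically; here's a sketch (the example URLs are illustrative, not the actual registration links).

```python
import re

# Matches a second http(s) scheme pasted in front of an already-absolute
# URL, with or without the colon: "https://https://..." or "https://https//..."
DOUBLE_SCHEME = re.compile(r"^(?:https?://)(?:https?:?//)", re.IGNORECASE)

def is_dead_double_scheme(url: str) -> bool:
    """True for links like 'https://https://example.com/verify',
    which no browser can resolve."""
    return DOUBLE_SCHEME.match(url) is not None

print(is_dead_double_scheme("https://https://example.com/verify"))  # True
print(is_dead_double_scheme("https://example.com/verify"))          # False
```

A one-line check like this in their email templating pipeline would have caught the broken validation link before a single member received it.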
Registration Link Broken: Check.
The first "thank you" for reporting the dangerous URL.
The next few days are spent relaying issues to <redacted> until finally, I'm told an independent penetration tester could not confirm any of the issues I'd raised.
I'll leave out the back & forth which followed, but TL;DR (huh - too late for that) they flat refuse to accept any of the issues I've raised affect the production version.
By now, I suspect <redacted> was growing tired of me parroting the same line, only to hear the same response.
8th October 2020:
I'm starting to smell a rat... so I download the installer again.
Hmm. They updated the installer the day after I reported the production build was insecure.
You can probably guess what's coming, but don't get ahead of me. Will they continue to claim I'm wrong?
9th October 2020:
I receive an email, out of the blue, from another NPCC Cybercrime Programme member.
By now, I'm losing my patience.
Note: I've redacted a paragraph, as it relates to an on-going investigation which isn't suitable for wider release.
Let me explain the above... because I reference the issues in the "test build" and I don't want any confusion.
Mindful of them putting the wrong URL in the PDF, it's entirely logical to expect anyone in receipt of it would install it, as I did, believing it to be "CyberAlarm". With no formal update process and no notification from NPCC, that would remain the case. I explained the other issues directly related to the "production build" in prior emails to <redacted> and the other <redacted> from October 2nd.
Remember, this "test build" can't talk to a server for updates... it was decommissioned "2 years" before I installed it. If it IS running anywhere, it'll stay that way until it's either removed or, worse, an attacker leverages it.
They claim this version was "never released or promoted by anyone, least of all the Police" - and we now know that to be inaccurate... so I disagree with the suggestion there's no risk to the public.
13th October 2020:
The admission, in black and white, that the Police accidentally made available a massively-insecure "test build" to the public and said nothing. No article, no email, no "oops, we screwed up, don't install that"... just silence.
Worse still, they don't seem to understand the risk it presents... so you can understand why I'm finding it impossible to outline why the "production" build is equally insecure.
If you're still here, thank you.
I know there's a huge amount to take in and believe me, I've cut this down to make it somewhat readable.
Two members of the same force acknowledge I raised the dodgy URL and both thank me for alerting them - a world apart from NPCC's claim that I merely "stumbled upon" it.
The same force admits they released an insecure build and, to date, I'm not aware of a single publication to alert the public.
The force/NPCC remain confident that PCA is secure.
Grab yourself a brew and let's dive in one last time.
24th November 2020:
I publish my findings, primarily focusing on the "test build" but referencing the same vulnerabilities (but different method of execution, in some cases) in the production build.
There's a much greater chance the production version is more widely deployed, so I redacted key elements to ensure I don't become the purveyor of zero-days.
25th November 2020:
NPCC send the first cease & desist letter.
TL;DR - "You're wrong, you knew you were wrong - pull the article by Friday 5PM, and stop using our image, you're breaching copyright"
26th November 2020:
Even after the ugly & unnecessary cease & desist, I tried one last time.
NPCC respond again, with another cease & desist... which you've all now seen, as it forms the majority of the official response they published.
In the second C&D, <redacted> questions why I didn't post the first C&D letter, which is a reasonable question.
I didn't post it because it's highly embarrassing for both NPCC and <redacted> and, as I said from the very beginning, I was trying to protect NPCC's reputation whilst following a normal disclosure process. I had assumed that the "olive branch" email I sent in return would make them step back, reconsider and perhaps question why someone would willingly commit career-suicide in the public view, by releasing demonstrably false information.
Alas, no such luck... just more threats and a libellous retort.
27th November 2020:
I pull the installer again, just to see if any of the other issues which don't exist... don't exist anymore. It was modified on the 14th of October.
29th November 2020: Lies, deception and defamation...
Something just doesn't add up.
We have the NPCC Cybercrime team filled with respected professionals, actively pushing a product which I can repeatedly demonstrate is woefully insecure.
Let's open that update from the 6th...
The 1024-bit RSA key is still there, but they've fixed the vulnerable "wget" call... so the installer is now safe(r). The app, however, still has TLS verification disabled.
Swing and a miss.
I'd already caught them out once when Pervade stated the laughable "secret" key from the "test build" didn't exist in production. Now, they're at it again.
Someone at Pervade (it really doesn't matter who) is actively patching the very issues I raised the previous day... but claiming they were never there. Utterly abhorrent behaviour.
They say trust is a fickle thing. Whilst I might be able to overcome the hideous code, if they're willing to lie, repeatedly - and put the NPCC's reputation at risk, it's game over.
Confirming my suspicions...
Let's open the installer from the 27th... (which is the latest, as of this article)
They're using 2048-bit keys! Colour me shocked.
What about index.php?
They validate TLS too! It's a mess elsewhere, but let's brush over that for now.
I can't see the infamous "P3rv4d3" server secret either. Things are looking up! I still wouldn't trust it, but at least they are patching bugs, even if they didn't exist to begin with.
I really should try to cut down on sarcasm, it's not a good look.
Though, if someone more talented than I could explain this laughable bit of obfuscation, that'd be great.
What about the CREST STAR / NCSC-approved "pen test"?
Your guess is as good as mine.
Whilst it fits with the theme to simply bury the testing company, it's not beyond the realms of possibility that Pervade provided an entirely different or scaled-back application, limited the scope or even lied about the results. It's also possible the testing companies (there are apparently more than one) are useless.
Keep this in mind...
There's a world of difference between "we found no issues" and "there are no issues".
It's also possible the testers found all of it (and more - there's plenty) and Pervade/NPCC simply ignored the experts and carried on.
Take this gem from their official response, for example...
I assumed (how stupid of me, by now) that a direct quote from a Red Hat expert would make them reconsider their ludicrous decision. If someone describes Linux as a footgun, it's probably not a good idea to play around with security controls that you clearly don't understand.
I could try again but let's face it, I'm a nobody.
So, I'll hand over to Mr Karanbir Singh.
Karanbir is widely credited with shaping the CentOS project as we know it today. He's not just a contributor, he's the project lead. I can't think of anyone better placed to convince the NPCC team that they're not only wrong, they're dangerously wrong. Keep in mind, SELinux is disabled in the production builds!
So do I, but I don't expect them to be forthcoming.
I offered to sign an NDA but even then, the source & pen test reports were "not for public record".
I could continue posting each & every vulnerability, but I think I've made my point.
One last thing though... breach of copyright for using the CyberAlarm logo?! If this isn't the definition of a criticism/review, I'm not sure what is.
A couple of things I really should have mentioned...
A few people have (correctly) pointed out that my claim that PHP 5.4 was insecure isn't always true. This is a nuanced topic & one I actively wanted to avoid, but I'll try to explain as quickly as possible.
Official vendor support for PHP 5.4 ended 5 years ago. Official support for PHP 7.2 ended... on the 30th November 2020 (seriously!). The vendor's (The PHP Group's) official advice mirrors mine: upgrade as soon as possible, because there may be unpatched security vulnerabilities.
However, it's not quite as clear-cut and although I don't condone relying on backports, I should have explained it more clearly. Just as you may opt for an "extended, 3rd party warranty" with your latest iPhone, platforms like RedHat/CentOS provide something called backporting; the process of actively monitoring for new vulnerabilities and retrospectively rolling patches into previous versions.
This is often misunderstood and causes no end of confusion, largely because whilst the official version stays the same (5.4.16 for example), the subversion (-xx) is often hidden from a standard "phpinfo" or "php -v" command. As a result, patches are often missed and vulnerabilities (even if you're responsible enough to update) remain unpatched.
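To illustrate: on RPM-based distros, the backport level lives in the release suffix of the package version, which `php -v` never shows. Here's a sketch with invented, RHEL-style version strings; these are not actual CyberAlarm package versions.

```python
def split_version_release(vr: str) -> tuple:
    """Split an RPM-style 'version-release' string, e.g. '5.4.16-48.el7'
    -> ('5.4.16', '48.el7'). The release part is where backported security
    fixes accumulate; 'php -v' only ever reports the version part."""
    version, _, release = vr.partition("-")
    return version, release

old = split_version_release("5.4.16-7.el7")   # hypothetical early build
new = split_version_release("5.4.16-48.el7")  # hypothetical patched build

print(old[0] == new[0])  # True:  identical upstream version...
print(old[1] == new[1])  # False: ...wildly different patch level
```

Two boxes can both report "PHP 5.4.16" while one is dozens of security errata behind the other, which is exactly how patches get missed.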
Of course, that's the case here too.
The "test build" from October 2nd 2020 is vulnerable to the following...
CVE-2018-10547 (CVSS 6.1 Medium)
CVE-2018-5712 (CVSS 6.1 Medium)
CVE-2019-9024 (CVSS 7.5 High)
CVE-2018-7584 (CVSS 9.8 Critical)
CVE-2019-11043 (CVSS 9.8 Critical)
2 "critical" issues - never to receive a patch.
Where's the MIT license guys?
CyberAlarm, both old & new, uses the "phpseclib" library... distributed under the MIT license. I can't, for the life of me, find any reference to the required copyright notice... and if it doesn't exist, they're not providing the necessary credit and have effectively stolen the developer's work.
At the media's request, I withheld this article to give them time to question NPCC & Pervade. Unsurprisingly, they doubled down and refused to accept there are serious issues with CyberAlarm. Following a call between NPCC and the media, they offered to send a link "to have a good look at", presumably to prove it's safe.
As the final nail in the coffin, they sent the media a link to the current public "live" version with a modification date of 29/11/20.
Are there any issues? Of course.
Step 1: Upgrade PEAR - we wouldn't want to be left insecure, would we?
Step 2: Install 10-year-old, outdated & vulnerable dependencies.
A 10-year-old, outdated build.
A 10-year-old beta test build!
A 10-year-old ALPHA build! (Using an alpha in production - what could go wrong?)
A 4-year-old, outdated build.
A 9-year-old, outdated build.
Do I really have to explain the risks of deploying an alpha build of anything to the NPCC? Again, this is in the installer, visible in plain text. Jeez...
For CyberAlarm - I hope it's pulled with immediate effect. It should never have reached the public in this state and those responsible for both developing it & presiding over its deployment should probably reconsider their positions.
For me - I hope NPCC do the right thing, apologise & retract their defamatory post... quickly.
Of all the vulnerabilities I raised in the "test" build, only 2 are fixed - CentOS & PHP. I say "fixed", PHP 7.2 went end-of-life on the 30th November 2020, but backporting... yada yada.
If we ignore those, let's briefly bullet everything else.
SELinux Disabled:
They don't deny this - but don't understand the risk. I've told them, a RedHat article explains why it's dangerous and now, thanks to Karanbir, CentOS have stepped in to prove beyond doubt that this is dangerous; unless nobody cares about exploits like ShellShock etc.
Remote Code Execution:
Still possible, though not through "getmon.php". Keep in mind this product is live in schools, councils, mental health clinics, SMEs with valuable intellectual property and a Formula 1 team - I obviously can't drop a zero-day which puts them all at risk. However, untrusted input combined with "exec"... the outcome is self-evident.
Mindful of the risk, here's a demo proving it's possible, with steps removed to protect the public...
Cross Site Scripting:
Still possible - part of the wider product rather than the data collector itself. Hint - Validate on input, sanitize/encode on output; anything else leaves you vulnerable to XSS attacks.
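The "encode on output" half is a one-liner in most languages; here's a Python sketch (PHP's `htmlspecialchars()` plays the same role), using a classic illustrative payload.

```python
from html import escape

# A textbook reflected-XSS payload. Echoing this back verbatim executes
# it in the victim's browser; encoding on output renders it inert text.
payload = '<script>alert(document.cookie)</script>'

encoded = escape(payload, quote=True)
print(encoded)  # &lt;script&gt;alert(document.cookie)&lt;/script&gt;
```

Validation on input narrows what gets stored; encoding on output is what actually stops the browser interpreting it. You need both.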
TLS Certificate Expired:
The production version points to a different endpoint which does have a valid certificate. Not that it matters however... as TLS security checks are disabled everywhere.
Attacker: "Hey, I'm a Police server with an important update for you..."
Collector: "You don't look like a Police server and your certificate has expired... so, what can I do for you?"
Attacker: "Extract this .tar.gz and execute the contents..."
Collector: "Sure thing"
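If the transport can't be trusted (and with verification disabled, it can't), the payload itself has to be authenticated before extraction. Here's a minimal sketch pinning a known-good SHA-256 obtained out-of-band; a proper signature scheme would be stronger still, and all values below are invented.

```python
import hashlib
import hmac

def is_authentic(payload: bytes, expected_sha256_hex: str) -> bool:
    """Refuse to extract/execute an update unless its digest matches a
    value published out-of-band (or, better, verify a signature)."""
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)

genuine = b"genuine update archive bytes"
pinned = hashlib.sha256(genuine).hexdigest()  # distributed out-of-band

print(is_authentic(genuine, pinned))               # True:  extract it
print(is_authentic(b"attacker's .tar.gz", pinned))  # False: refuse it
```

With a check like this, the little dialogue above ends at "what can I do for you?" instead of arbitrary code execution.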
Broken Encryption:
... sure looks like an encryption key to me.
They don't deny using AES-CBC without a MAC either - but again, demonstrate they don't understand why that's not only important, but crucial.
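For context, the textbook fix is encrypt-then-MAC: authenticate the ciphertext so any modification (CBC bit-flipping included) is rejected before decryption is even attempted. A stdlib-only sketch follows; the "ciphertext" is a stand-in byte string, since this illustrates the missing authentication layer, not AES itself.

```python
import hashlib
import hmac

MAC_LEN = 32  # length of a SHA-256 tag

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    """Encrypt-then-MAC: append an HMAC over the ciphertext, so tampering
    (e.g. CBC bit-flipping) is detected before anything is decrypted."""
    return ciphertext + hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def open_sealed(mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-MAC_LEN], blob[-MAC_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch - ciphertext was modified in transit")
    return ciphertext

mac_key = b"\x02" * 32                        # stand-in key, NOT the AES key
ciphertext = b"opaque AES-CBC ciphertext..."  # stand-in for real CBC output

blob = seal(mac_key, ciphertext)
tampered = bytes([blob[0] ^ 0x80]) + blob[1:]  # flip a single bit

print(open_sealed(mac_key, blob) == ciphertext)  # True: untampered, accepted
try:
    open_sealed(mac_key, tampered)
except ValueError as exc:
    print(exc)  # rejected before any decryption happens
```

Without that tag, CBC ciphertext is malleable: an attacker can flip chosen plaintext bits without knowing the key, which is precisely why "crucial" isn't hyperbole.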
Broken Encryption - Part Deux:
Exactly the same as the "test" build - all SSL/TLS security checks disabled.
Embedding server passwords in code:
Exactly the same as the "test" build.
The similarity between these two "completely different pieces of software" is staggering.
That's it folks. Thank you for listening and for heaven's sake, be careful what you install on your network.
Notes / Files
If you've downloaded CyberAlarm previously, you'll be able to hash the files to prove I've not modified anything other than filenames. If even a single character in any file changes, the entire hash would be different - thus proving we have different versions.
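To show what that means in practice, here's a quick sketch of the avalanche effect; the byte strings are invented stand-ins for the real files.

```python
import hashlib

original = b"CyberAlarm.ova byte-for-byte contents"
modified = b"CyberAlarm.ova byte-for-byte Contents"  # one character differs

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print(h1 == h2)  # False: a single changed byte produces an unrelated digest
```

So if your own downloads hash to the same values as my copies, you have independent proof the contents are identical, filenames aside.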