Monday, May 30, 2016

Cyber attacks on... *** toys???!

For those of you in the USA, enjoy Memorial Day and remember the service members who died securing your freedoms.  For those of you everywhere else, happy Monday! Enjoy your work.

I read an article this morning that boggled my mind.  If you read the blog regularly, you know that I am a big believer in IoT security and in the need to audit the devices that are otherwise slipping through the cracks.  Today, I learned that the Internet of Things already has far more types of "things" connected to it than I previously realized.

This article mentions that Trend Micro experts went on record to say that cyber attackers might target Internet-connected sex toys.  Now, before anyone goes and attacks Trend Micro for such a ridiculous statement, they do offer a more plausible attack idea.  Also, know that the press can and does misquote people to get the best story.  So I have to wonder: was Raimund Genes misquoted when, in response to the hacking question, he said "But if I can get to the back end..."?  If not, well done sir. That's an epic troll.

But maybe the better question is: why are there network-connected sex toys in the first place?  I did a quick Google search and discovered that there's a whole niche market I never knew about for wifi- and bluetooth-enabled sex toys, primarily marketed to long-distance lovers.

There's more than one manufacturer of wifi enabled sex toys, oh my
Okay, so has anyone evaluated these devices and applications for security?  Dare I say has anyone done a penetration test on the sex toys?  Okay, I'll report directly to terrible pun jail - that was simply too much...

But this is an interesting facet of the broader IoT conversation.  I suspect few will care if their coffee drinking habits are revealed when their wifi coffee maker is hacked.  But I'm betting that if someone hacks your sex toy and posts your usage patterns to the web, that would be downright embarrassing. Another vector attackers might use would be denial of "service" (sorry for another terrible pun) - possibly leading to the world's first DDoS - dildo denial of service (okay, I'll stop now).

There's a philosophical point here - are our lives really made better each time we connect another part of them to the Internet?  I would argue they are not.  Despite my highest hopes, my Egg Minder hasn't changed my life for the better.  What about the "smart" piggy bank I bought my daughter? I'm not sure that's really helped either.

Finally, there's probably a DFIR angle here too.  I suspect that forensics on the applications controlling these devices (or perhaps even the devices themselves) could be very valuable in certain cases (though I admit those cases are probably few and far between).  I haven't examined any of these applications for residual data, but I'll bet there's something there.
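For the curious, a first pass at that residual data doesn't require anything exotic.  The sketch below is hypothetical Python - the database path and app name are invented, not taken from any real product - but many companion apps keep their state in SQLite, so dumping every table from a recovered database is a reasonable first triage step.

import sqlite3

def dump_sqlite(db_path):
    """Print every table and its rows from a recovered application database."""
    con = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            print("--- %s ---" % table)
            # Fine for small app databases; large ones deserve targeted queries.
            for row in con.execute("SELECT * FROM %s" % table):
                print(row)
    finally:
        con.close()

# Example: a database pulled from a (hypothetical) companion app's sandbox.
# dump_sqlite("com.example.companionapp/databases/usage.db")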

*Note: The views on this blog are mine and mine alone. They don't reflect the views of any organizations with which I may be affiliated, my mom, my priest, or my dog.  If you are mad at me for something I say here, my advice is stop reading my blog.  But if you insist that your voice be heard, address it with me here. I have comments enabled for precisely that reason.

Sunday, May 29, 2016

Cyber bombs - Does the Pentagon think we're stupid?

Does the Pentagon think we're stupid?
Wait, don't answer that... Any time you wonder what the government is spending your tax dollars on, just remember that the Pentagon now has a program where they are dropping 'cyber bombs' on ISIS.  But what exactly is a cyber bomb?  I honestly have no idea.  One of the first things that comes to mind is "are there different classes of cyber munitions?" Cyber bullets? Cyber cruise missiles? Cyber tanks firing cyber sabots?

Update: Twitter follower Han Solo Mio noted that a Cyber A-10 might be the most appropriate munition and even created a pretty kick butt logo for it (used here with permission).

Cyber A-10 Thunderbolt

Analogies for cyber munitions are all bad.  The reality is that cyberspace is very unlike the traditional battlefield.  As I see it, there are two primary differences between cyber and traditional warfare: prepositioning and deploying effects.

Prepositioning on cyber key terrain
The first difference is in prepositioning.  Sure we need to preposition physical assets for the physical battlefield, but when all else fails we have the 82d Airborne.  Wheels up anywhere in 18 hours.  The same can't be said for cyber.  We can't preposition assets on cyber key terrain in 18 hours. Just doesn't work that way.

Prepositioning assets in cyberspace means deploying malware, and the longer that malware sits in place, the more likely it is to be detected.  When the malware is detected in one location, you lose that capability everywhere you have it deployed.  Deploying a carrier battle group doesn't work that way: when a carrier battle group is deployed (and inevitably detected), we don't lose the ability to deploy other carrier battle groups.  Malware, once detected, must be rewritten - and that changes the calculus for prepositioning cyber assets.

Deploying cyber effects
A cyber effect is the more technical term for the 'cyber bomb' the Pentagon has talked about deploying against ISIS.  The problem is that once an effect is deployed against an adversary with any detection capability, it is gone forever.  Cyber effects require prepositioning and have a limited lifespan: once an effect is detected in use, signatures spread quickly and even commodity antivirus will catch it.

Stop the rhetoric
Instead of saying we are dropping cyber bombs on the enemy, perhaps we can just say we're hacking ISIS and call it a day.  The American people aren't stupid (okay, maybe we are) and don't need these weak analogies to feel like we're having an effect on ISIS' online activities.

Friday, May 27, 2016

Court blocks smart meter security details from public release

I posted earlier about a story in which Sensus filed a request for a temporary restraining order to prevent Seattle City Light from releasing documents submitted in an RFP.  Under Washington state law, when Sensus responded to the RFP, their documents became public record.  Now they want to undo that, claiming in part that releasing details of the encryption used to "protect" the smart meters would lead to security compromises.  Infosec professionals know this probably means that they rolled their own encryption.  Separately, we've known for years that the only good encryption is encryption that has been independently audited.

A copy of the temporary injunction is here.  A hearing is scheduled for June 9th to determine whether the injunction shall be made permanent.

I hope the EFF will file an amicus brief in this case to at least get security details of the devices released.  Sensus is trying to get pricing details redacted as well, calling that data a trade secret.  There are almost certainly other trade secrets detailed in the bid, but the moment Sensus placed them in a publicly available document, it lost trade secret protection as I understand it (of course, I am not a lawyer).

If you live in Seattle or elsewhere in Washington, now is a fine time to talk to your elected officials about this case.  The outcome will likely have national effects on security research, particularly for anything that can reasonably be called "critical infrastructure."

Bottom Line
As with much of the public debate around security research, we need to remember that only the good guys follow laws.  The bad guys will obtain these devices and the data about these devices without the help of a public records release.  The public is not served by allowing these black box devices to be installed without proper independent security reviews.

Thursday, May 26, 2016

Security through obscurity isn't security at all

I just wrapped up a great few days at the EnFuse conference and I'm sitting in the airport waiting for my flight to board.  In the meantime, I started reading an application for a temporary restraining order (TRO) seeking to prevent a researcher from obtaining details provided in an RFP to a public entity (Seattle City Light).

Don't reverse engineer our device
The TRO, if granted, would restrict the release of an unredacted copy of the RFP data.  The company, Sensus, doesn't want their security controls known.  However, they apparently forgot that bidding on a public contract, where those details were part of the RFP response, would expose them to release.  If public money is being spent on the devices (and it is) and the public will be forced to use the devices (they will), then the public should have the opportunity to evaluate the security of those devices.

However, in its plea to the court, Sensus makes it clear that one of their fears is reverse engineering of the devices.


Secure encryption
But the real issue is that Sensus apparently believes that encryption can only be safe if nobody can examine it.  Consider this excerpt:


I suspect that at first glance this makes sense to some.  But of course we know that encryption is only trustworthy when it has been exposed to public review - and even then, it may still contain vulnerabilities.  This statement alone puts Sensus in a delicate position to defend later.  The Sensus VP declares under penalty of perjury that releasing this data to the public would create a cyber security risk because an attacker could compromise their encryption.

But if that's really the case (and not just hyperbole), then Sensus' encryption is fundamentally broken.  Another possibility is that Sensus' encryption deployment is completely secure but their VP simply doesn't understand what he's talking about.  Admitting that, however, would put Sensus in an awkward spot, since it would call into question the rest of their claims.
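To make the contrast concrete, here is a minimal sketch of the alternative to secrecy-dependent crypto, assuming Python with the widely reviewed "cryptography" package installed (the plaintext is invented).  The point is Kerckhoffs' principle in practice: the algorithm and the message format can be completely public, and the only thing that has to stay secret is the key.

# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the key is the secret, not the algorithm
f = Fernet(key)

token = f.encrypt(b"meter reading: 42 kWh")
print(token)                  # showing an attacker the token format reveals nothing useful
print(f.decrypt(token))       # only the key holder can recover the plaintext

If your security argument collapses the moment someone reads your design documents, you aren't describing encryption - you're describing obscurity.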

Chilling effects
Finally, Sensus threatens that this required disclosure will have a chilling effect on its participation in the public marketplace.  Sensus says that if they are required to disclose RFP submissions for public review, they will either withdraw from the market or charge a substantial premium to compete in it.


Honestly, neither of these options sounds that bad to me.  If Sensus removes itself from the public AMI market because their devices cannot withstand public security scrutiny, we are probably all better served as consumers.  If Sensus imposes a substantial premium on its bids (as threatened), that isn't bad either.  Other companies that are not afraid of public scrutiny will step in to fill the void, and again the public is better served.

Independent security evaluations
Just as the fox can't guard the henhouse, the engineers who build a product can't be responsible for evaluating its security.  Independent security evaluations are required, particularly before your devices and designs are subject to public scrutiny.  Rendition Infosec performs a number of these evaluations annually, and we regularly find that engineers build products with what they were taught were "best practices" that are in fact fundamentally insecure.  While the engineers say the product was built using the best the industry has to offer, they often simply don't understand security.  Absent independent reviews, we all suffer.



Monday, May 23, 2016

Title inflation is killing infosec

Title inflation - we've all seen it.  I've worked for one of those organizations where everyone is a director.  Apparently clients feel better about the fact that they are being handled by a "director."  

As a side note, I often wonder if clients know they are being bamboozled by the company.  Probably not - we're wired to want to feel important, so it's easier to keep believing you're being handled by a "director" even if it's totally BS.

I was inspired to write this when I saw today on LinkedIn that a security wannabe with zero formal penetration testing experience was hired by a firm I used to trust and is now being billed to clients as a "senior penetration tester."  I almost lost my lunch picturing competing against this company down the road.

Why does title inflation matter?
Because clients don't often know any better.  Senior must be better than Junior.  Master must be better than Senior.  And what about the fabled Infosec Evangelist? Where do they rate?

Because there is no standard that specifies what skills a senior person in any infosec discipline should have, it's easy for consumers to become confused.  It's pretty easy to think that none of this matters.  But the FTC cares a lot about consumer education.  If the consumer can't figure out what they are buying, that's usually when the FTC steps in.  The FTC did it with cars (that's why the MSRP sticker is on the car now).

Personally, I'd prefer for the government to stay the heck out of infosec regulation.  This job is challenging enough without government coming in and regulating titles or licensing practitioners (which is arguably the easiest way for the government to regulate titles).  If you think that sounds crazy, know that the UK government already largely did this with CREST.  They then tried to bring this to the US with the NBISE, so it's not out of the realm of possibility.  And once the government gets involved in regulating an industry, hold on tight.  In GA, for instance, it's a crime to arrange flowers without a license.  Freaking ridiculous.

Anecdotal story time
At Rendition Infosec, we hire only the best, but I don't inflate titles (or egos).  As a result, I have to deal with the fallout from the lack of title inflation all the time.  I don't care what my employees call themselves internally - and I can't share some of the "internal use only" titles they've come up with for each other - but titles matter to clients.

On a recent bid, a client came back and told us that they liked our bid but that a competitor promised to only use "senior penetration testers" on the engagement.  I asked if they understood what that meant.  They admitted they didn't know, but figured "senior" testers must be better than the regular penetration testers Rendition had promised to use.  We took that challenge and provided resumes for our testers.  The competitor provided resumes for their "senior penetration testers."  Ours won hands down.  The client suggested that maybe we should give our employees new titles.  While I pragmatically agree that might help win some bids, I can't become part of the problem.

Parting thoughts
I would be happy to write some guides to help consumers shop for infosec services if people out there think they would be valuable.  In the meantime, educate the customer about what you can (and more importantly can't) do, and check your inflated titles at the door.  If you manage people in infosec or run a business, please stop inflating the titles of the people who work for you.

Sunday, May 22, 2016

Check your browser tabs (and stop watching porn at work)

On behalf of forensics professionals everywhere, I implore you to stop watching porn at work.  And by "at work," I don't just mean within the confines of your place of business.  With our new "always on" working environments, let's just agree that looking at porn on any digital device you use for business is a REALLY bad idea.

If you're a politician, you've made a conscious choice to live in the public eye.  In this case, your whole life is an open book - for better or worse (usually worse).  This means you don't really get any separation between work and personal computing - especially if you post to social media.

Don't look at those other "research" tabs

I've seen other examples of embarrassing browser tabs and embarrassing filenames in remote virtual meetings and even sales presentations.  But I've honestly never seen a politician post a screenshot like this - showing porn in two other tabs.  I'm not judging and I'm definitely not victim shaming.  But I will simply point out that mistakes like this can't happen if you don't look at objectionable material on your work machine in the first place.

But it was research....
If I had a dollar for every time I've heard this, I'd be sipping mojitos on my own private island in the Bahamas.  But I must admit that Webb's explanation of the type of research is a first for me: something to the effect of checking for malware that might be preventing him from filing his candidacy.  That's oddly specific (and malware is unlikely to be targeting politicians through porn sites).  Later, Webb clarified that he really meant he had been battling malware on his computer for weeks and that those tabs were opened from links on his Internet dating site.

Separate work and play
This incident points to a bigger issue and there's a lesson to be learned here.  Separate work from play.  Period.  In other words, use separate machines for work and play.  For those of us who are always on the road, this is harder to do since it means traveling with a minimum of two devices.  But at Rendition Infosec, we regularly find that when work and play mix (particularly on corporate machines) it's the business that suffers.

Play can be anything from online college classes to running a side business to Internet dating.  Whatever it is, if it's not business, employees should refrain from doing it on their business machines.  An employee who gets infected when mom sends that stupid PowerPoint slideshow full of jokes can't lose confidential information if it's their personal laptop they infect.  Policy will take you far in this regard, but technical controls that prevent human stupidity trump policy every time.

In healthcare (home to some of Rendition's biggest customers), we keep seeing people lose PHI.  You can't lose PHI that was never on your personal computer.  IT security has a hard enough job worrying about your work machine.  If they also have to worry about your home machine (which is probably administered by Geek Squad or your teenage kid), that just compounds the problem.

So before you fire up that inappropriate website on your work computer, take a trip to Best Buy and get a machine for your own personal use (I bought my kid a laptop to learn to program on for $200). With a personal machine in hand, you can do all of that freaky deeky doo stuff that you want to do without risking the business's data or reputation.

What about Webb's malware problems?
I got all ranty and almost forgot about Webb.  If Webb is being sincere (as many people believe he is), then he needs some information security help.  I hope he funds information security spending in Congress if he gets elected.  But for now, I'd recommend some professional help with his computer.  If Webb can't find a trusted source for that help, I'll happily offer up someone from Rendition to nuke and reimage his computer (the only real solution for a malware infestation).  We'll even check for malware along the way.

Tuesday, May 17, 2016

Is a false sense of security better than nothing?

When offered the opportunity to buy snake oil, we frequently jump at the chance.  The tendency is so strong that when faced with the choice between snake oil and nothing, we frequently choose snake oil. I think we're just hard-wired that way.  It's like we can't help ourselves.

Last week, Stefan Esser released a new iOS application that was purported to detect jailbreaks on the phone.  The idea is that if you are a target, your phone may have been jailbroken without your knowledge.  This application was supposed to alert you to this fact and allow you to take some corrective action.

Esser's Jailbreak detection application

The problem is that any jailbreak detection software is notoriously easy to bypass.  So much so that any application claiming to be able to detect this is effectively no better than snake oil.  And Apple knows this.  So they pulled Stefan Esser's tool from the App Store.  This caused some controversy, because there are other (lesser publicized) applications that have been published to the App Store that were not pulled.

Why was Esser's app pulled?
Probably because Esser has been a thorn in Apple's side for a while.  He's constantly publishing security research and generally making Apple look bad.  I get why they pulled his application (which, to be fair, he was hyping) and left others alone.  I'm not saying I agree with it, but I get it.

Was Esser's app snake oil?
That depends on your stance.  Esser certainly demonstrated that it can detect some jailbreaks.  But as I said earlier, these detections are notoriously easy to bypass.  Is partial detection better than no detection?  Yes, if it actually detects that you are compromised. Presumably, Apple reasons that the false sense of security the app gives when it detects nothing causes more harm than the good it may provide.

Why do you care?
I'm just along for the ride.  I don't have a vested interest either way, though I tend to agree with Apple that a false sense of security causes more harm than good.  At Rendition Infosec, we see this all the time with antivirus.  People think it will protect them from all badness and of course we all know better.  I tend to think that anything that promotes this "silver bullet" mentality is probably bad for us as a whole.

Sunday, May 15, 2016

Firmware implants (aka the APT bogey man)

I was recently brought into a discussion about verifying the firmware on laptops purchased from a particular manufacturer.  The security concern was that the vendor might insert malicious code as a way to get into the organization - presumably acting on behalf of some nation state.  This is a very mature question to be asking - the reality is that most organizations have no idea whether or not their firmware is secure.  And supply chain compromise is a very real fear to have.  Buying from trusted manufacturers and vendors can help, but it doesn't completely solve the issue.

I'll start by saying that laptop firmware (all firmware, actually) is hard to verify.  The problem is that short of a chip-off extraction with an EEPROM reader (destructive, difficult, and expensive), you are relying on the firmware itself to aid in its own extraction.  If the firmware is compromised, it could just as easily present something different to you when you try to extract it.  Using a software-based extraction method is like asking a criminal whether they committed a crime - you're asking because you suspect something is wrong, but they can lie to you just as easily as tell you the truth.  It's unreliable.

A highly respected colleague chimed in on the issue as well and said that firmware implants are probably just a means of persistence for other code.  This is certainly what I would expect.  With good system monitoring (and memory forensics) you'll see that other code, no matter how stealthy it is.  The goal of the firmware implant is to make sure that when you reload the machine, the badness comes back at some later date without the attacker having to re-compromise the machine.

Now for some cold, hard truth: if you have supply chain concerns, get a new supplier.  Yes, practically all of our digital devices are made offshore now, but some suppliers are more trustworthy than others.  If this is a concern in your organization, flashing the firmware with a known-good image is also recommended.  While infected firmware could theoretically ignore the flash, that is highly unlikely to work in practice.  If the new BIOS has some recognizable difference that you can observe, all the better.
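If you want a "recognizable difference you can observe," hashing the image is one cheap check.  The sketch below is hypothetical Python - the filenames and the reference hash are placeholders, not from any vendor - and it carries the caveat from above: a dump produced by compromised firmware can lie to you.

import hashlib

KNOWN_GOOD_SHA256 = "0" * 64  # placeholder - substitute the hash published by your vendor

def sha256_file(path, chunk_size=1 << 20):
    """Hash a firmware dump in chunks so large images don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

dump_hash = sha256_file("firmware_dump.bin")
print("MATCH" if dump_hash == KNOWN_GOOD_SHA256 else "MISMATCH", dump_hash)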

I'll close by telling you the same thing I tell any of my Rendition Infosec clients looking for APT boogey men: yes, very advanced attacks (and attackers) exist.  But honestly, you're better off using your limited security resources to address the easy stuff, e.g. netflow and PCAP monitoring.  If you get these two right, you'll see the attacker exfiltrating data from the network - even if they ARE using a firmware implant.
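As a rough illustration of the PCAP half of that advice, here is a minimal sketch (not Rendition tooling) that sums outbound bytes per external destination and prints the top talkers - the kind of crude exfiltration triage I mean.  It assumes Python with scapy installed and a made-up 10.0.0.0/8 internal range.

from collections import Counter
from ipaddress import ip_address, ip_network

from scapy.all import IP, rdpcap

INTERNAL = ip_network("10.0.0.0/8")  # assumption: substitute your own internal range

def top_talkers(pcap_path, top_n=10):
    """Return the top_n external destinations by outbound byte count."""
    byte_counts = Counter()
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        src, dst = ip_address(pkt[IP].src), ip_address(pkt[IP].dst)
        if src in INTERNAL and dst not in INTERNAL:  # only count traffic leaving the network
            byte_counts[str(dst)] += len(pkt)
    return byte_counts.most_common(top_n)

if __name__ == "__main__":
    for dst, total in top_talkers("capture.pcap"):
        print(dst, total, "bytes outbound")

It won't catch a patient attacker trickling data out slowly, but it will surface the noisy bulk transfers, which is where this kind of monitoring pays off first.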