IE Zero Day: Response Required

Another day, another zero day vulnerability (Gosh, I love that term. So ominous. Like seeing a mushroom cloud). This time it’s in IE 9 and 10.

Side track: It's amusing to me that a vulnerability in software that has been outdated for 3.5 months can be so devastating, but I guess that's the corporate world.

There are a number of pretty good technical writeups on the exploits for this one seen in the wild already, so I won’t go into that. I’d rather address what you should do if you’re a medium to large company.

A senior analyst at my company was tasked with looking into this threat. He read the title of the FireEye post, sent an email that said "Upgrade to IE 11" and called it good. Not terribly helpful. There are a number of steps you should take to start looking into this.

The first question you should ask is, “Have I been attacked already?” If you have a SIEM tool this should be easy enough to determine. The first domain you’ll want to look for is the VFW site (vfw.org), since that was the site that was originally compromised.

If you don't find any traffic, that's a very good sign. I wouldn't relax quite yet, though, as someone could've accessed this site from their laptop at home. FireEye put together a list of the IPs and domains they've seen that are probably C&C servers connected to the VFW compromise, so it's good to watch for these too:

First Seen Last Seen CnC Domain IP
2013-08-31 2013-08-31 icybin.flnet[.]org 58.64.200.178
2013-05-02 2013-08-02 info.flnet[.]org 58.64.200.178
2013-08-02 2013-08-02 book.flnet[.]org 58.64.200.178
2013-08-10 2013-08-10 info.flnet[.]org 58.64.200.179
2013-07-15 2013-07-15 icybin.flnet[.]org 58.64.200.179
2014-01-02 2014-01-02 book.flnet[.]org 103.20.192.4
2013-12-03 2014-01-02 info.flnet[.]org 103.20.192.4

And as long as you're searching for traffic, look for anything hitting the following as well:

First Seen Last Seen CnC Domain IP
2012-11-12 2012-11-28 me.scieron[.]com 58.64.199.22
2012-04-09 2012-10-24 cht.blankchair[.]com 58.64.199.22
2012-04-09 2012-09-18 ali.blankchair[.]com 58.64.199.22
2012-11-08 2012-11-25 dll.freshdns[.]org 58.64.199.25
2012-11-23 2012-11-27 rt.blankchair[.]com 58.64.199.25
2012-05-29 2012-06-28 book.flnet[.]org 58.64.199.27

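If your SIEM makes this kind of search easy, great. If not, even a quick script over an exported proxy log will do the job. Here's a minimal sketch of that search, assuming a CSV export with `dest_host` and `dest_ip` columns (both hypothetical field names; adjust to whatever your proxy actually produces):

```python
import csv

# Indicator domains and IPs from the FireEye write-up (tables above).
IOC_DOMAINS = {
    "vfw.org", "icybin.flnet.org", "info.flnet.org", "book.flnet.org",
    "me.scieron.com", "cht.blankchair.com", "ali.blankchair.com",
    "dll.freshdns.org", "rt.blankchair.com",
}
IOC_IPS = {
    "58.64.200.178", "58.64.200.179", "103.20.192.4",
    "58.64.199.22", "58.64.199.25", "58.64.199.27",
}

def scan_proxy_log(path):
    """Print any proxy log rows whose destination matches a known indicator."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if host in IOC_DOMAINS or row.get("dest_ip", "") in IOC_IPS:
                print("HIT:", row)

if __name__ == "__main__":
    scan_proxy_log("proxy_export.csv")  # hypothetical export file name
```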

If all of those searches turn up empty, you might be OK. But depending on your infrastructure, it's still possible someone took a laptop to Caribou and got compromised while they weren't attached to your network, so it wouldn't be a bad idea to create an alarm for traffic to those addresses for some ongoing awareness.

At this point, you can be pretty confident you aren’t already infected (from this specific exploit). So how can you stay clean?

The good news about this exploit is that it is a new way to use old attacks. FireEye also has a good write-up on some of these techniques. You should make sure your endpoint protection is watching for them (which it should be already).

If you read the article closely, you may have noticed that the exploit checks for EMET to be installed and simply gives up if it is. "Shouldn't you just install that?" you may ask. And it's not a bad idea to install it (or to just add a file at the location the exploit checks for). But that's a fight this specific exploit's authors decided not to pick. The next attacker to use this vulnerability may decide it's worth figuring out how to evade EMET, so it's not a silver bullet.

And sadly enough, that's most of what you can do at this point. Look for evidence you're already compromised and then make sure your existing security defenses are working as well as they can. I suppose you can also cross your fingers that Microsoft comes out with a fix soon (they did put out a press release, but it amounts to "Windows Server is less vulnerable, polar bears can be dangerous, and we'll get back to you on this").

Extra: There's some interesting research on using Windows crash reports to catch infections like this on your network here, but it's pretty theoretical. You might be doing some pioneering if you want to go this way.

NTP and DDoS Attacks

A novel method of mounting a DDoS attack has surfaced: NTP. I've read a number of good technical explanations of how the attack was performed and the sheer volume of traffic it generated (some 400 Gbps), so I'd like to take a step back and talk about DDoS attacks in general.

What is a DDoS attack and how does it work?

DDoS stands for distributed denial of service. It’s a fancy way of saying an attacker uses multiple computers to keep legitimate users from doing what they want to on the internet. Here’s another way to explain it:
Let's say you hate your local department store and you want to keep them from helping their customers. With this nefarious intention you go to the store, stand at a register, and refuse to move.
You're not going to be very effective, are you? But what if you sent everyone in town a message that if they go to the store and ask for a free coat at 4 in the afternoon this Friday, the store will give them one. You can picture what will happen: the store will be filled with people wanting free coats. Customers buying real merchandise will have trouble getting to the checkouts, and the store won't be able to sell anything.
(On a side note, if it helps, the civil rights sit-ins were essentially a DDoS attack.)
That’s all there is to it. You send so much traffic to a website or a service that they can’t respond to legitimate traffic. Where it gets interesting is how you send that traffic. Just like there’s no point to standing at a register by yourself, you would never be able to generate enough traffic by yourself to shut down a website. To perform a DDoS attack you need to get other people to join in.

NTP DDoS Attack

NTP is a protocol for making sure that different computers have the same internal time. NTP servers are fairly common on medium to large networks (universities, businesses, etc.), and they can operate for years without anyone thinking about them. Except for, apparently, some individuals looking for a way to create a DDoS attack. I'll leave out the gory details, but attackers found a way to use these NTP servers to aim a previously unheard-of amount of traffic at a target.
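For the curious, the feature being abused is NTP's old "monlist" debugging query, which answers one small packet with a list of the last several hundred clients the server has talked to — a huge amplification. If you run NTP servers and want to know whether one of them could be used as a reflector, the classic test is `ntpdc -n -c monlist <server>`. Here's a rough Python sketch of the same probe (the hostname is hypothetical; point it at your own server):

```python
import socket

# Mode-7 "monlist" request (implementation 3, request code 42) -- the query
# abused for NTP amplification. A vulnerable server answers this tiny packet
# with a burst of responses listing its recent clients.
MONLIST_PROBE = b"\x17\x00\x03\x2a" + b"\x00" * 4

def responds_to_monlist(server, timeout=3.0):
    """Return True if the server answers a monlist query, i.e. it could be
    used as a reflector. The usual fix is 'disable monitor' in ntp.conf or
    upgrading to a current ntpd release."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(MONLIST_PROBE, (server, 123))
        data, _ = sock.recvfrom(4096)
        return len(data) > 0
    except socket.timeout:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    print(responds_to_monlist("ntp.example.com"))  # hypothetical hostname
```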

How scary is it?

What's interesting about this attack is that (as so often happens) it isn't difficult to prevent. It doesn't take any great, brilliant minds to figure out how to keep this kind of thing from happening in the future. Attackers just found something most people don't consider threatening enough to secure and took advantage of it. How many other "set it and forget it" technologies do we have on the internet that could be used in some clever, malicious way?
On a more positive note, in some ways this may be a good sign. For quite some time now DNS has been the standard DDoS weapon, and because of this the internet community has been encouraging companies to take steps to prevent their DNS servers from being used in these attacks. If attackers looking to wield a DDoS are moving on to other means, DNS may be slowly becoming less appealing. Our defense methods are working; they just take time.

WebSockets and Security Infrastructure

WebSockets are new and very cool. If you're not familiar with them, Wikipedia (as always) has a pretty good article.

Working on a WebSocket test app recently, I had a connection that kept failing.

A thought occurred to me, so I switched from ws to wss, causing my traffic to be encrypted. My company doesn't currently decrypt any SSL traffic on our security appliances, so this effectively turned my socket into a black box to my proxy, IPS, and firewall. That worked.
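If you want to reproduce the experiment, the client side can be tiny. Here's a rough sketch using Python's third-party `websockets` package against a hypothetical echo endpoint (not my actual test app, just the shape of the test): flip the URL scheme between ws:// and wss:// and watch what your security gear does with each.

```python
import asyncio
import websockets  # third-party: pip install websockets

async def try_socket(url):
    """Open a WebSocket, send one message, and report whether it worked."""
    try:
        async with websockets.connect(url) as ws:
            await ws.send("hello")
            reply = await ws.recv()
            print(f"{url}: got reply {reply!r}")
    except Exception as exc:
        print(f"{url}: failed ({exc})")

# ws:// is plaintext and fully visible to the proxy/IPS/firewall;
# wss:// rides inside TLS, so without SSL decryption they only see a tunnel.
asyncio.run(try_socket("ws://echo.example.com/socket"))   # hypothetical endpoint
asyncio.run(try_socket("wss://echo.example.com/socket"))
```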

I ran a query on our proxy and sure enough, there was the traffic for both requests.

I decided to look on the external firewall.

Only the SSL traffic was showing up here. So something (proxy, IPS, internal firewall, or external firewall) was knocking down the unencrypted socket traffic.

For the record, I’m aware that it’s better to decrypt/recrypt SSL traffic on your proxy. Chalk it up to a mixture of old technology and budget concerns. You work with what you have.

Lessons learned here:

  • WebSockets are cool, but they might get stopped by your internal security infrastructure.
  • If encrypting the traffic resolves the issue, you've just discovered a security hole in your environment.

Facebook: Finding Ways to Monetize

Facebook Paper is a new app for iOS that came out this week. It was advertised as shiny and new and had an oh-so-hipster commercial.

So I gave it a download. It was shiny and new, just as advertised. It lets you browse your Facebook news feed, has slick animations, and sports a "flat" design (which is all the rage these days, I'm told). It's also a news reader and allows you to add different categories of things you might be interested in, such as technology, pop culture, etc. I like the app as an app, but there's one key feature missing from the news reader: you can't pick the news sources that make up those categories.

Let's take a step back for a second. Facebook isn't evil or the antichrist, but they are trying to make money. They were soaring high for quite a while back when desktop browsing was the norm, but shortly after their IPO their stock crashed pretty hard. You can find a bunch of different reasons for the crash, but the fact that their desktop usage dropped off while their mobile usage surged was a huge part of it. Facebook had no ads in their mobile apps, and thus no way to make money off of them.

It's not hard to connect the dots that Mark Zuckerberg connected: Facebook needs a way to make money off of its mobile apps if it wants to survive.

Their first step was Facebook Home.

Still no ads here, and no way to make money, but they're learning. At first Home got terrible reviews, but Facebook learned and improved upon it, working towards an app people would actually use. To me, Paper looks like the next iteration of that. It's a similar strategy to how Facebook got so popular in the first place: build something cool that people want to use, then advertise to them once you have a ton of users.

Which brings us all the way back around to Facebook Paper and the fact that Facebook chooses what news articles you see in a category. Their end goal is to sell advertising and make money. To do that they want all of the information they can get about your online activity, in this case the articles you read.

So why did it make me pause? Two things.

  1. If you use Facebook Paper as your only news source, your news consumption is selected by Facebook, and potentially by whichever companies are willing to pay them money.
  2. If Paper becomes popular, Facebook has an incredible gold mine of data about you and what you read online.
Facebook isn't evil, but they are trying to make money. It may not be happening yet, but there's a good chance they'll start advertising to you using the extra data they're able to gather, or quietly selling your data in the background. Before you become invested in Paper, think about whether you're comfortable with Facebook having that kind of data about you.

How Many Requests?

The web used to work like this:

I send an HTTP request by typing, say, "google.com" into my address bar, and I get a single page back.

Simple, right? One request, one page. No hassles. But then people wanted to add pictures to pages, and the easiest way to do that was to have the page make your browser send another request for each image, as if you'd typed its address in yourself, so it became:

One request, one page that makes you send another couple requests for pictures.

Let's play a game I like to call "how many requests does it generate?" Here are some results:

google.com/voice:

cnn.com:

facebook.com (this one surprised me; I was expecting a whole lot more):

So going to one web page is really like going to anywhere between 75 and 243 pages. It’s not a problem, as long as the infrastructure of the internet can handle the load we put on it. But the next time it’s a little slow, cut it some slack. It’s handling a ton of activity to tell you how many likes your picture got.
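If you want to play the game yourself, the network tab in your browser's developer tools gives the exact count. A rough first-order approximation in code is just to pull down the HTML and count the sub-resources it references, something like this sketch (it undercounts, since scripts and stylesheets go on to trigger requests of their own):

```python
import re
import urllib.request

def rough_request_count(url):
    """Fetch a page and count the sub-resources (images, scripts, stylesheets,
    etc.) it references. Real browser counts will be higher, since scripts and
    CSS request even more resources of their own."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    resources = re.findall(r'(?:src|href)\s*=\s*["\']([^"\']+)["\']', html)
    return len(resources)

if __name__ == "__main__":
    print(rough_request_count("https://www.cnn.com"))
```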

Is Google Chrome Actually Listening?

If you've been watching security news this week, you've probably seen a number of articles about this exploit that the writer claims allows Google Chrome to be turned into a surveillance tool. If true, this could be very concerning for Chrome users, but as with all things cyber-security related, it's best to apply some analysis to the situation before running for the hills. Here are some of the questions I asked myself while going through this exploit.

What exploit is being presented?

This might seem simple, but it's an important question to start with. What are you being told is actually broken and exploitable? Most demonstrations of an exploit require lots of other technologies to make them possible, and the media will often mix those other technologies into their articles as if they were vulnerable too. So what should we focus on? In this case, the author is claiming that when you have given a website permission to use your microphone and that site opens a popup window, Chrome doesn't always display the "recording" icon on the popup (does anyone not close popups with a vengeance? I thought humans have hated them since the 90s).

Are non-vulnerable technologies being presented as scary?

Now it sounds a little less scary. It's really just saying Chrome isn't confirming your permission settings to you (a bug, no doubt, but far short of turning Chrome into an automated surveillance tool against the user's will and without their knowledge). If you watch the YouTube video on the bug, most of it is spent on voice recognition and popups, demonstrating how the library the author is using can highlight predefined words as it recognizes your speech. While this has its own implications for privacy, we are now very far away from what the exploit is actually about.
As a side note, an easy way to raise fear in a person when you're telling them about a vulnerability is to make it personal and to use buzzwords from current events. Whether you're discussing a Facebook, email, Google, Gmail, or other kind of vulnerability, telling someone you can catch their conversations about finances, boys they think are cute, or the NSA and Syria is a surefire way to get attention. If someone does this, you should question how solid their exploit is and their motives in presenting it.

Does the presenter have anything to gain?

Motives in cyber-security are an oh-so-interesting topic, but let's stay focused on Chrome and its reluctance to show us that it's recording.
It took me a couple of times reading through the article to really catch this line:
Now that's interesting. It's not an empty boast; his library is on GitHub and has a number of downloads. But it is interesting that his library comes up twice in a not-so-long article, and is a significant focus of his demonstration of a vulnerability that is only tangentially related. His demo code also has calls to Google Analytics, allowing him to see how many people are running his demo.
He also discusses a reward:
Mr. Ater isn't just crying wolf; this is an issue, and Google needs to fix it. But it is also possible he used a few fear tactics to generate media attention for his work.

Target Malware Attack: Are you at risk?

It’s recently been made known that the Target security breach was at least partially caused by malware installed on their POS systems. There are already a number of good technical explanations of how these work, so I won’t add to that. I’d rather discuss if your business is at risk. Most of the content from this post comes from an email I sent my manager on the subject.

What the malware does

When a POS system reads a card, the data travels in a path like this:

Physical credit card -> Credit card reader -> USB Port -> RAM -> Application -> Encryption -> Hard disk or network

The data obviously may move back and forth between the application and the RAM a few times as it is processed, but the most vulnerable point in the path is when the data is in RAM before the application has touched it, and hence before the application can encrypt it. So for at least a short time the data is sitting in RAM unencrypted, which is where the trojan can grab it.
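"Scraping RAM" sounds exotic, but it's essentially pattern matching: track data has a very recognizable shape, so the malware combs memory for it. The same idea works in reverse for a defender looking at a memory dump from a POS machine. Here's a minimal sketch (the dump file name is hypothetical, and the regex is a simplification of real track 2 formatting):

```python
import re

# Track 2 data looks like ";<PAN>=<YYMM><service code><discretionary data>?"
# where the PAN is 13-19 digits. Anything matching this pattern in memory is
# unencrypted card data.
TRACK2_RE = re.compile(rb";\d{13,19}=\d{7,}\?")

def find_track_data(dump_path):
    """Scan a raw memory dump for byte runs that look like track 2 data."""
    with open(dump_path, "rb") as f:
        data = f.read()
    return TRACK2_RE.findall(data)

if __name__ == "__main__":
    hits = find_track_data("pos_memory.dmp")  # hypothetical dump file
    print(f"Found {len(hits)} possible card records")
```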

Once the trojan has the data, it saves it in a DLL and then waits for a time when most stores (and consequently their networks) will be busy before sending the data over NetBIOS to a control server (which is how they get data out without the POS having an internet connection), which can then forward the data to a server on the internet (there isn't a ton I could find on this last step, but it probably looks like other data exfiltration).

Why Target was vulnerable

Apparently the hackers were able to get this software onto a large number of POS systems at Target and scrape credit card data during the busiest time of the year (Black Friday). Every customer who comes through Target has to pay somehow, and a lot of them use credit cards. The high rate of customers through each POS system meant that a single compromised system would see a lot of individuals' information.

The fact that, until now, POS systems haven't gotten much public attention for needing to be secured means they're likely running old operating systems (I've personally seen a lot of XP) and probably don't get patched frequently, which may have made it easy to infect large numbers of POS systems very quickly.

How can I stay safe?


If you have an IT department capable of network segmentation, put your POS machines on a segment by themselves and watch for any ICMP traffic coming out of them. Also lock down the ports they are allowed to communicate on. Obviously they need to get Windows updates and communicate with a server to send sale data, but you should know every port they talk on and prevent them from doing anything you don't understand.
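"Know every port they talk on" is easier to enforce if you turn it into an allowlist and check your firewall logs against it. Here's a rough sketch of that idea; the subnet, destinations, and CSV column names are all hypothetical placeholders for whatever your environment actually uses:

```python
import csv

# Hypothetical egress allowlist for the POS segment: the update server and the
# back-office sales server. Anything else leaving the segment is worth a look.
ALLOWED = {
    ("10.10.50.5", 443),    # hypothetical update/patching proxy
    ("10.10.60.20", 8443),  # hypothetical sales application server
}
POS_SUBNET = "10.10.40."    # hypothetical POS segment

def unexpected_egress(log_path):
    """Yield firewall log rows where a POS machine talked to something that
    isn't on the allowlist. Assumes a CSV export with src_ip, dest_ip, and
    dest_port columns (adjust to your firewall's actual export format)."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["src_ip"].startswith(POS_SUBNET):
                continue
            if (row["dest_ip"], int(row["dest_port"])) not in ALLOWED:
                yield row

if __name__ == "__main__":
    for row in unexpected_egress("firewall_export.csv"):  # hypothetical export
        print(row)
```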

If you don’t have an IT department capable of doing this, make sure you patch your POS machines frequently. Run antivirus and malware detection on them. If you buy your POS machines from a vendor who handles patching, make sure your vendor is patching them and keeping them up to date.

Engineers vs. Help Desk Technicians

This is the second post coming out of my work combining two teams and creating a help desk.

I've spent a lot of time studying the difference between an engineer and a help desk technician. I think it's easiest to approach it through bullet points:

When presented with a problem

  • A help desk technician takes down a name, a couple keywords about the issue, and searches a knowledgebase of some kind or takes the notes to a supervisor to ask them what to do.
  • An engineer has background knowledge about the system or technology that is having a problem. They wrote the code, created the configuration files, etc. They turn on the creative side of their brain and start working on a new solution.

When asked to give an opinion

  • A help desk technician panics because opinions aren’t in the documentation.
  • An engineer spews forth sermons on the benefits of vi over Vim, Chrome over Firefox, etc.

When told they are going to support a new technology they haven’t worked on previously

  • A help desk technician cringes because they have to memorize how to find more documentation.
  • An engineer gets excited because they get to learn something new.
I may sound a little critical, but I'm not trying to be. I've been in both of these roles at various times, in college and after, and they're two different skill sets. One is just easier to train for on a new job than the other. What's also interesting is that a person with either job title can act like the other, but one transition works better than the other.

Help Desks


This post isn't security oriented or technical. It's just what I'm working on most recently. At work, a management decision recently blended our Network Infrastructure and Security teams together, and I was asked to run a project melding the two "on call" responsibilities and to figure out a single point of contact or help desk setup so people would know how to contact us.

Help desks are tricky things. The ideal (most expensive) solution is to have happy, highly intelligent people on call 24 hours a day, so that when the phone rings, someone who can answer the question is immediately available.


But it isn't cost effective to have your software engineers sitting on the phone all day doing nothing but waiting for a call, so you start having them do project work and watch the phone too. But then their project work takes priority and they can't answer the phone, so you hire a less qualified person that you can pay less to watch the phone when the engineer isn't available, and you have the engineer write documentation for the less qualified person to cover whatever they don't know.

That solution sounds good. The less qualified person answers the phone, finds the documentation or script they need, and answers the question. But engineers write documentation for engineers. When someone with less experience or education than an engineer is reading it, the call usually turns into this on one end or the other:

I don’t think there’s a good way to convince engineers to get excited about answering phone calls. Maybe donuts every Monday or something. Really, help desks are just difficult. And you do the best you can with the people and skills you have.

Is I, Robot coming?

Google has been in the news a lot lately for buying a number of robotics companies. If you’re not familiar, just search “google robotics” and read an article similar to, “Google plans to take over the world with advanced robotic war machines” (or something that sounds catchier with the same meaning).

Basically, Google purchased a company that builds robots (pretty advanced ones, some of them pretty creepy at the time of this writing), which sparked a lot of discussion about which sci-fi movies were coming true and whether we have to start worrying about Terminator or I, Robot.

We're still pretty far off from either of those. Don't get me wrong, Google's robots are still scary and could be dangerous, but they're not going to be creepy in the way I, Robot or Terminator were. Google's robots have strong movement ability and agility, and they can carry a lot of weight. But they're not advanced in AI or human interaction.

These robots are really no different than a UAV or a remote-controlled car; they've just developed different methods of locomotion. There's no intelligence there, no ability to decide to kill their human masters and turn them into batteries.

What should actually scare us are things like IBM's Watson, which played Jeopardy really well.

Even though Watson might just look like a box, its ability to correlate events and answer questions is a lot closer to what makes I, Robot and Terminator scary. The advances IBM made building Watson brought us closer to I, Robot than any "big dog" running around a parking lot. I guess the lesson here is to package your robots cute so people won't panic.

NOTE: There’s another article about what makes robots creepy, but hopefully I’ll have another post on that later.