You Can’t Secure Your Environment With Closed Source Tools

For years I’ve worked with commercial security products and services, incorporating them into a cohesive security program to provide my company with effective protection. This has proven challenging because the operation of these tools was always obscured from me. Setting up open source tools in a lab provided transparency and control that I didn’t have with the commercial products we used on the production network.

Let me use IDS as an example. I’ve used both commercial tools and managed services for intrusion detection and prevention. In both cases I receive alerts for activity that is indicated to be an attack, and in both cases there is no detailed information about the alleged attack. I have to place some level of trust in the vendor that the rule that fired the alert was accurate. On the surface it would seem obvious that the rules would be accurate, with the occasional false positive.

When I’m able to see these rules, the first step in responding to an alert is to review the rule and determine the likelihood that the alert is an accurate finding. From that point I’m able to prioritize how much effort to spend and how quickly to investigate the event. If the rule is poorly written or overly general, I remove it from my ruleset. If the rule is well written and unlikely to produce a false positive, I can increase the priority and pursue the investigation. If I’m unable to evaluate the rule directly, I’m left guessing at a priority for the event, and I can only validate the rule through a lengthy investigation. With hundreds of events being reported daily, it’s impossible to investigate each of them to resolution.

Approaching this task with a long-term plan, you can save the output of each investigation and learn which alerts are valid and which are overly general. But since these products often ship thousands of rules, and the vendor is adding to them constantly, tracking which rules provide accurate results and which do not is nearly impossible, even if the tool provides a way to document your evaluation of each rule. When these tools or services are provided by a third party, the rule sets are considered proprietary and integral to the value of the company. In all cases it’s been difficult for me to get the details of a rule from the vendor; in some it’s been impossible.
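To make that investigation record concrete, here is a minimal sketch in Python of keeping a local disposition for each rule, so future alerts from known-good rules get priority and overly general rules get pruned. The names here (dispositions, the tracker class, the example rule IDs) are my own invention, not part of any product:

```python
from dataclasses import dataclass, field

# Dispositions assigned after investigating alerts produced by a rule.
VALID = "valid"                    # rule has reliably indicated real attacks
OVERLY_GENERAL = "overly_general"  # rule fires on benign traffic
UNKNOWN = "unknown"                # not yet evaluated

@dataclass
class RuleTracker:
    """Local record of how each detection rule has performed over time."""
    dispositions: dict = field(default_factory=dict)  # rule_id -> disposition

    def record(self, rule_id: str, disposition: str) -> None:
        """Save the outcome of an investigation for later triage decisions."""
        self.dispositions[rule_id] = disposition

    def priority(self, rule_id: str) -> int:
        """Higher number = investigate sooner. Unevaluated rules sit in the middle."""
        disposition = self.dispositions.get(rule_id, UNKNOWN)
        return {VALID: 2, UNKNOWN: 1, OVERLY_GENERAL: 0}[disposition]

    def prune(self) -> list:
        """Rules worth removing from the ruleset entirely."""
        return [r for r, d in self.dispositions.items() if d == OVERLY_GENERAL]
```

The point is not the data structure but the permanence: each investigation leaves a record you can act on the next time the same rule fires.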

This complaint applies across almost all products in the information security realm; currently the vast majority of these are signature-based, reactive tools: Anti-Virus (AV), Intrusion Detection / Prevention Systems (IDS/IPS), Security Information and Event Managers (SIEMs), and Vulnerability Assessment (VA) Scanners. Responding to the indications from these products requires a depth of experience in both information security and operations that isn’t often found. The volume of alerts from these products and the scarcity of people able to respond appropriately to them creates a challenging situation for any team.

Open source tools have open code that can be freely modified by anyone, and they also have an open ruleset that can be trimmed or expanded by the user. These freedoms offer the chance to tune the rulesets to produce valuable output. It still requires significant effort, but that effort produces lasting value immediately. The savings on purchase price will be offset by the cost of the experienced people and time required to implement the solutions. It’s truly a situation where people make all the difference.



Government Work From Home Scams

Some time ago there was considerable concern in the security community over link-shortening sites. The first one I remember was tinyurl.com. These sites allow you to enter a long URL, and they provide a shorter URL that will redirect to the site you gave them. With the growth of Twitter and SMS, which limit the number of characters in a message, these services have become more common and easier to use.

The security concern is that the recipient can’t see where the link goes before clicking on it. For years awareness efforts have taught people how to determine where a link actually points. With these shortened URLs it’s impossible to tell where they lead; the only precaution possible is to trust the person sending you the link. With the number of people sharing links on Twitter, it’s very difficult to determine which senders are trustworthy and which are not.

To alleviate some of this risk, the US Federal Government started its own service to shorten links ending in .gov. This service produced a link that also ended in .gov, so users could trust the link because only .gov sites would receive these shortened links. This was a relatively quick fix to provide a higher level of trust in shortened links. Unfortunately the fix wasn’t completely thought through and has itself been abused.

Some web sites are vulnerable to an attack called Open Redirect. Sites are vulnerable because they accept a parameter from the web client and redirect the client to another site determined by that parameter. On the surface this sounds reasonable, because no client would pass a parameter for a site they didn’t want to visit. The issue is that an attacker can present a user with a link and parameter that appear valid but take the user to a malicious web site. This vulnerability wasn’t considered when the .gov link-shortening service was designed.
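A minimal defense is to validate the redirect target before using it. The Python sketch below (the allowlisted hostnames are hypothetical, and this is an illustration of the check, not any site’s actual code) accepts only relative paths or absolute URLs pointing at hosts we explicitly trust, falling back to a safe default otherwise:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this site is willing to redirect to.
ALLOWED_HOSTS = {"www.example.gov", "apply.example.gov"}

def is_safe_redirect(target: str) -> bool:
    """Accept relative paths (they stay on our site) or allowlisted hosts only."""
    parsed = urlparse(target)
    if not parsed.scheme and not parsed.netloc:
        return True   # relative path like "/jobs" cannot leave the site
    if parsed.scheme not in ("http", "https"):
        return False  # reject javascript:, data:, and other schemes
    return parsed.hostname in ALLOWED_HOSTS

def choose_redirect(requested: str, fallback: str = "/") -> str:
    """A vulnerable handler redirects to `requested` unconditionally;
    a safe one validates it first and otherwise uses a local fallback."""
    return requested if is_safe_redirect(requested) else fallback
```

Note the protocol-relative form `//evil.example.com/` is also rejected: it has a netloc but no http/https scheme, so it fails both checks.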

Since some .gov sites are vulnerable to Open Redirect and no authentication is required to generate the shortened URL, attackers have realized they can use the .gov short links to redirect the user, twice actually, to their malicious sites. These sites currently aren’t installing malware or performing any other covert actions; they simply use the trust of the .gov links and the promise of work-from-home opportunities to phish personal information from users.

When implementing any solution, one of the steps should be to brainstorm how the solution could be abused and identify mitigations for the most likely abuses. With a little extra effort, the .gov shortening service could have been given the capability to scan the long URL and verify it wasn’t vulnerable to Open Redirect before generating the short link. This check could still be implemented now, but a number of users have already been impacted and trust in the short-link service has already been damaged.

Reporting is the Goal

In many projects I find that companies and leaders don’t have a good understanding of the goal of their projects. I use the statement ‘the reporting is the goal’ and get sideways looks from people. The management adage says you can only succeed at what you can measure, and I agree to a large extent. When you start a security project you should know what success will look like, and ideally you should be able to measure it before you start. ‘We’re going to understand the layout of the network and update that understanding weekly’ looks very different from ‘we’ll protect assets from intrusion’.

The issue with most projects is that they don’t have a clear goal, and lacking that, there is no way to prioritize work. Understanding the payoff for a project up front is what makes the project workable. Most people today are swamped with work, more than any two people could complete. That means they have to prioritize, and they are going to spend time on the project that provides the best payoff to them.

The reality is that you don’t simply need to sell the project to management; you need to sell it to the people who are going to make it happen. The best part is that these people are the most discriminating audience you will ever have to sell to. They need to believe that the success of the project means a better life, easier work, or more autonomy for them.

Most IT folks are lazy, the kind of lazy that will put in 40 hours of overtime to ensure that a 5-minute weekly action is automated. They want to see the right thing happen, but that also means fewer problems and less work on their part. What this buys you, if you focus it correctly, is a more regular environment, and a regular environment is a bonus for security. If these folks understand what the project is going to do for them, they will work tirelessly to advance it.

National Blog Posting Month and Me

I heard about this last year and didn’t participate then, but I’ve been mulling over the idea of starting a blog for some time. Now that I’ve decided it’s really something I want to try and I’ve started, my wife challenged me to write a post a day for a month. The funny thing is we agreed to both do this, and we decided to start in the next month, which happened to be November. When I was looking for resources to help me with the effort, I stumbled on the NaBloPoMo post on WordPress. It was great realizing that our challenge happened to fall on the national challenge month.

Why a post a day for a month? Left to my own devices I would only blog occasionally, because it’s not something I’m accustomed to doing and it’s not part of my routine. Since I don’t do it very often, the tools I need aren’t readily available when I do want to write a post. The goal of the challenge is to make this a routine part of my day long enough that I become accustomed to it. Along the way I’ll develop both the tools and the skills to make this easy enough to do on a regular basis. Since the goal of this blog is to help me maintain prepared material on topics relating to information security, I have plenty of material to work from, both historical and current.

I hope this month will provide me with a base of articles on the blog, but the bigger goal is to accelerate my blog writing so I’ll be able to articulate myself clearly and quickly. I’ll be posting here and on my personal blog keithseymour.com; my wife is posting on our personal blog geekyexplorers.com and her rescue site ilovemyrescue.com. I hope you will drop by to see us during the month.



The Defender’s Dilemma

The dilemma of the defender is that the attacker is able to purchase or pirate the software protections that we use to defend our networks. In its simplest form, virus authors are able to buy, test, or steal antivirus products and keep them updated. The ability to test their product (the virus) against our defenses ensures they will succeed in placing it in our environment.

In a limited sense this has little impact on defense; signature-based antivirus is generally considered a dead technology and is installed mainly to meet regulatory compliance. But this vulnerability extends to every commercial product you can purchase: attackers can buy any commercial product with the default ruleset installed and test their attacks against it.

Given this level of access, the attacker can also easily test the product itself for security vulnerabilities. While this might seem a bit ridiculous, many security products receive little scrutiny for vulnerabilities because it’s assumed that security products would have few if any. It’s almost a blindness that security groups have for the products they acquire.

Compared with business software, there are very few security products on the market; for most security products the market is small. This means the attacker’s return on investment is higher for security software than for most business software. If the attacker is targeting your business, it’s even easier, because they can normally identify the software packages you use from the resumes or interviews of your current and past employees.

What can you do to protect yourself from these issues? Test the security-related software you purchase at least as well as any other package you purchase. Also, if there are default rules built into your products, make sure you have customized them to be relevant to your environment. And if you can avoid it, don’t expose these products to the Internet.




Zero Day Attacks As More Than Vendor Hype

The discussion of zero day attacks is filled with vendor claims and misinformation. The reality of zero days isn’t nearly as exciting as the news and vendors make it sound, but understanding the difference between the reality and the hype is critical for planning your cyber defense. In most uses the term ‘zero day’ has become an overused, meaningless marketing buzzword. Despite that, it’s a very important concept and one that you need to understand if you are going to provide security.

A zero day is a vulnerability that is known by at least one person but hasn’t been reported publicly yet. When you think about it, that makes sense: bugs are mistakes made when writing software, and as such they are unknown until someone finds them. When a security researcher finds a bug (vulnerability) in software, there are a number of things she could do with that information. She could contact the author of the software and notify them of the problem, she could sell the information to any number of commercial vulnerability vendors, she could sell it to a government, or she could use it to break into systems hoping to steal information of value.

When planning your cyber defense, you have to assume that your applications have vulnerabilities you don’t know about but an attacker does. You have to structure your applications so that when an attacker breaks into the first server, they don’t gain access to everything you are trying to protect. While it might sound like an impossible task, system and network administrators have been planning for unexpected failures for years.

An example would be a web application that you expose to the Internet to offer information and products to your customers, with the data about these transactions stored in a database. You could put both components on the same server, but then an attacker who breaks into your web application would also have access to the database. If you place the database on a separate server, an attacker who breaks into the web application won’t immediately have access to the database. Of course it’s still necessary to take precautions to detect when an attacker breaks into the web application, so you can stop them before they get to the data in your database.
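The tiering decision can be expressed as a default-deny network policy. The sketch below is illustrative Python, not any vendor’s firewall syntax; the zone names and port numbers are assumptions for a generic two-tier web/database deployment:

```python
# Each rule: (source_zone, destination_zone, destination_port, allowed).
# Hypothetical zones for a two-tier deployment.
POLICY = [
    ("internet", "web", 443,  True),   # customers reach the web app over HTTPS
    ("web",      "db",  5432, True),   # only the web tier may reach the database
    ("internet", "db",  5432, False),  # the database is never directly exposed
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if a rule explicitly allows it."""
    for rule_src, rule_dst, rule_port, allowed in POLICY:
        if (src, dst, port) == (rule_src, rule_dst, rule_port):
            return allowed
    return False
```

The key property is the default: an attacker on the Internet cannot reach the database port at all, so compromising the web tier is a necessary first step, and that step is one you can monitor for.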


IP Addresses for Programmers

Or Why an IP Isn’t Like a Postal Address

I find it disturbing that the first analogies for an IP address compared it to a street address for a building. An IP address is more like the cell phone number of a resident in an apartment house. This series of posts will cover various addressing schemes and work to identify the best way to identify hosts on a network for compliance and management. I have a vested interest in the subject because I work in information security (INFOSEC) and need to accurately track risk in an enterprise; my ability to do this is tied directly to the ability to track and report on network hosts. If you’re writing an app to manage or report on hosts on a network, please read this and learn: these are my requirements for any vendor in this space.

Why do we need this? Traditional tools track IP addresses as if every device has one and only one IP and it never changes. This is so far from reality it isn’t funny, but in reporting risk we must report accurately. For any report going to management discussing hosts on the network, we need to provide precise numbers and measures of effort. To do that we need to accurately track hosts and services (applications, databases, etc.), and to track those we need to automate the identification of these hosts and track their attributes correctly.
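As a sketch of what tracking attributes correctly might look like, the following Python keys a host on a stable identifier and keeps a history of the addresses it has held, rather than overwriting a single IP field. The field names are my own, not any product’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """Track a host by a stable identifier, not by whatever IP it holds today."""
    host_id: str      # stable key, e.g. a hardware MAC address or asset tag
    hostname: str
    ip_history: list = field(default_factory=list)  # (ip, first_seen) pairs

    def observe_ip(self, ip: str, seen: str) -> None:
        """Record a newly observed address without losing earlier ones."""
        if not self.ip_history or self.ip_history[-1][0] != ip:
            self.ip_history.append((ip, seen))

    @property
    def current_ip(self) -> str:
        return self.ip_history[-1][0] if self.ip_history else ""
```

With this shape, a laptop that moves from the office subnet to the VPN range is still one asset in the risk report, not two.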

This series of posts will discuss the issues with various identifiers for hosts and services and the solutions that can be used to track these assets across an enterprise. When we’re done, we will have a solution that encompasses every use case and correctly represents every asset in your enterprise. We will also provide network drawings representing problems and solutions, links to more information, and a glossary of terms to make the conversation accessible to everyone. The biggest surprise to me is that most people working in the industry don’t have the language to discuss these problems, much less the ability to precisely manage the hosts.