Defending against malware has, until now, focused on reactive technologies: intrusion detection, content filtering, malware detection and blocking, and so on.
There is an ongoing argument about how effective those reactive technologies are. There is no argument, however, that most of these solutions require very competent operators: without a good administrator, an intrusion detection solution is meaningless.
This discussion is not about how good malware detection and blocking solutions really are (there are some excellent products out there), but about the fact that they are all reactive. They must do a perfect job and block 100% of attacks, or the web site will be infected. Given the volume of attacks conducted today, defensive perfection is a difficult task.
The number of distinct attack signatures in use recently doubled from 600,000 to over 1.6 million, in just one year. This continues a multi-year, exponential growth in attack signatures that is swamping reactive solutions and their ability to find each new signature and include it in their databases.
Malware attacks are almost entirely automated. The days when a lone hacker decided to attack a single site are over. Attackers now use search-and-destroy programs to find thousands of vulnerable computers on which malware can then be installed. The goal? Build a botnet: a large network of computers ready to do the bidding of its controller.
The goal of a botnet operator is to amass as many compromised machines as possible, as quickly as possible, caring very little about who the victims are. This means the 'low-hanging fruit' - the machines that are easiest to attack - will be compromised, while the sites and servers that are even slightly harder to crack are skipped.
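This skim-the-easy-targets behavior is easy to picture in code. The sketch below is a minimal, illustrative Python model of how an automated scanner works through a target list: it attempts a quick connection, grabs whatever service banner it can, and on any failure gives up immediately and moves on rather than trying harder. The host/port lists and timeout are illustrative assumptions, not taken from any real bot.

```python
import socket

def probe(host, port, timeout=1.0):
    """Attempt a TCP connect and grab a service banner.

    Returns the banner text, "" if the service connected but sent nothing,
    or None if the port is closed/filtered -- the case where an automated
    bot simply moves on to the next target instead of trying harder.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""  # reachable but silent (e.g. HTTP waits for a request)
    except OSError:
        return None

def sweep(targets):
    """Scan a target list the way a bot would, keeping only reachable services."""
    results = {}
    for host, port in targets:
        banner = probe(host, port)
        if banner is not None:
            results[(host, port)] = banner
    return results
```

The key design point is the short timeout and the silent give-up on failure: with millions of candidate addresses available, spending extra effort on any single hardened host is never worth the bot's time.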
In this real-world context of automated attacks, an excellent protection strategy is simply to make your site and network less vulnerable than others. By identifying and eliminating your underlying vulnerabilities, instead of attempting to detect and block 100% of the attacks against them, you make your network harder to attack than the hundreds of thousands of others that have left their vulnerabilities in place.
By addressing this relatively small set of vulnerabilities, you can easily cause the attacker (typically an automated 'bot') to move on to the next entry in its target list rather than try harder to penetrate you. This avoids the need to play Russian roulette, trying to identify and block every attack signature before it can carry malware into your machine and disable your defense perimeter.
Making machines less vulnerable is not difficult. Botnets attack through relatively few, known vulnerabilities (more on that later), and those vulnerabilities can be checked for and plugged relatively easily: find and install a missing patch, change a vulnerable configuration, tighten up web applications, and so on. A bot trying to attack a network with no high- or medium-risk known vulnerabilities will be unsuccessful and will swiftly move on to the next target. From your point of view, protecting the organization you are responsible for, the task is accomplished.
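The "known vulnerability" check described above can be sketched very simply: compare what a service advertises about itself against a list of versions with known, unpatched flaws. The product names, versions, and banner format below are entirely hypothetical placeholders; a real assessment tool would draw on a maintained vulnerability database (CVE feeds, vendor advisories) rather than a hand-written dictionary.

```python
# Hypothetical banner-to-vulnerable-version data; placeholder entries only,
# standing in for a maintained vulnerability database.
KNOWN_VULNERABLE = {
    "ExampleFTPd": {"2.3.4"},
    "ExampleHTTPd": {"1.0.1", "1.0.2"},
}

def is_low_hanging_fruit(banner: str) -> bool:
    """Flag a 'Product/Version' banner whose version is on the known-bad list."""
    product, _, version = banner.partition("/")
    return version in KNOWN_VULNERABLE.get(product, set())

def triage(banners):
    """Split scanned services into 'patch now' and 'fine for now' buckets."""
    flagged = [b for b in banners if is_low_hanging_fruit(b)]
    clean = [b for b in banners if not is_low_hanging_fruit(b)]
    return flagged, clean
```

Both attacker and defender run essentially this same comparison; the defender's advantage is that patching the flagged version removes the entry from the attacker's hit list entirely.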
Vulnerability assessment and management has been a major pillar of network security in large, enterprise-class networks for many years. Within just the last couple of years, medium-sized and even small businesses have been discovering the common sense of fixing their relatively few vulnerabilities rather than erecting more and more defenses to keep those vulnerabilities from being attacked.
Vulnerability assessment tools, like AVDS, scan every node on a network on a frequent, regular basis. A penetration test, or a security consultant scanning your network once a year, every six months, or even every three months, doesn't cut it. Scans must be run regularly, weekly or at the very least monthly. The reason is obvious: Microsoft alone discloses a batch of new vulnerabilities every month (on "Patch Tuesday"), any one of which can affect your organization and open a potential security hole. On top of that, networks are dynamic: someone changing the firewall configuration can accidentally create an opening for an attacker.
We strongly believe that periodic vulnerability scans, coupled with even basic malware detection and blocking, will be enough to prevent an organization from being compromised and becoming part of a botnet - not because either method of defense alone provides absolute protection, but because together they harden the organization enough for the botnet operator to simply give up and move on to the next, weaker target.
A quick note about known vs. unknown vulnerabilities. While it is true that some malware attacks exploit "zero-day" vulnerabilities (flaws that are not yet publicly known or patched, hence 'unknown vulnerabilities'), these attacks are a tiny minority. The reason is that zero-day vulnerabilities are hard to discover, and are thus expensive and relatively few in number.
Computers that have been infected ('zombies') are so numerous that their open market value is currently about 4 US cents each. If I had information on how to compromise a network that nobody else knew about, would I waste it adding zombies to my botnet? No. I would sell it on the open market, where such information easily fetches $10,000-$100,000 (at 4 cents per zombie, the equivalent of 250,000 to 2.5 million zombies), or use it to compromise a lucrative target such as a bank, a sensitive government network, or a similar high-value target. The fact of the matter is that close to 100% of successful malware and botnet-related attacks use known vulnerabilities.
In summary, while it is 'sexy' to talk about reactively detecting and blocking attacks, doing so is impractical and often impossible without expensive technical expertise. It is much cheaper and more effective to be proactive: run periodic vulnerability scans to find the relatively easy-to-find known vulnerabilities that are used to break into networks, and plug those holes before attackers use them.