Antivirus Software Testing for the New Millennium
Abstract:
The nature of technology is changing rapidly; likewise, the nature of viral threats to the data dependent upon that technology is evolving. Thus, the technologies we rely upon to provide protection from these threats must adapt. In the last twelve months, several anti-virus software vendors have announced exciting new technologies which claim to provide “faster, better, cheaper” response to computer virus incidents within organizations. However, there is currently little guidance regarding the best way to evaluate the efficacy of such claims. Faster than what? Better than what? Less costly compared to what? Clearly, only one technology can be faster, better and more cost-efficient than all of the others, yet if the advertising claims are to be believed, all products are not merely created equal, they are all created superlative!
In this paper, the requirements for these next generation anti-virus systems will be
examined. There will be a discussion of reviewing strategies that can help to
determine to what extent those requirements have been met. To this end, the
problem will be approached from a functional perspective, not gearing the test
design to particular implementations. In this way, an array of tests will be created
which are not vendor or product specific, but which can and should be employed
industry-wide.
Authors:
Sarah Gordon (sgordon@format.com)
IBM Thomas J. Watson Research Center, U.S.A.
Fraser Howard (fph@format.com)
Virus Bulletin, U.K.
Point of Contact:
Sarah Gordon
sgordon@format.com
sgordon@dockmaster.ncsc.mil
Keywords: computer virus, anti-virus product testing, anti-virus product
certification, testing methodology, testing criteria, functional requirements.
Antivirus Software Testing for the Year 2000 and Beyond
Sarah Gordon (sgordon@format.com)
Fraser Howard (fph@format.com)
Introduction
In the last twelve months, several anti-virus software vendors have announced
exciting new technologies which claim to provide “faster, better, cheaper”
response to computer virus incidents within organizations [Anyware, 2000; NAI,
2000; PC-Cillin, 2000; Symantec, 2000a; Symantec, 2000b; Thunderbyte, 2000;
Trend, 2000].
However, there is currently little guidance regarding the best way to evaluate the efficacy of such claims. Faster than what? Better than what? Less costly compared with what? Clearly, only one technology can be faster, better and more cost-efficient than all of the others, yet if the advertising claims are to be believed, all products are not merely created equal, they are all created superlative!
In this paper, the requirements for these next generation anti-virus systems will be
examined. There will be a discussion of reviewing strategies that can help to
determine to what extent those requirements have been met. To this end, the
problem will be approached from a functional perspective, not gearing the test
design to particular implementations. In this way, an array of tests will be created
which are not vendor or product specific, but which can and should be employed
industry-wide.
The State of the Nation: Anti-virus testing in the 1990s
Antivirus product testing has improved greatly since the simple zoo scanning
offered in the first published reviews. Many, if not most, of the technical and
administrative problems documented in [Gordon, 1993; Laine, 1993; Tanner,
1993; Gordon, 1995; Gordon & Ford, 1995; Gordon & Ford, 1996; Gordon, 1997]
have been resolved. Today’s tests provide a solid, albeit not perfect, measure of
product capabilities.
As tests have become more complex, several bodies have emerged as leaders and innovators in this area. Some of the more widely-accepted tests within the industry are outlined briefly below (note that not all products qualify for testing under all schemata):
(i) ICSA Certification
The International Computer Security Association (ICSA) has been performing tests of antivirus software since 1992; many popular products are submitted to its various for-fee certification schemes [ICSA, 2000]. On-access and on-demand scanning are part of its ever-expanding certification criteria; criteria for virus removal were added in July 1999. Primary detection tests are broken into two main sections: In the Wild virus detection, and zoo virus detection. The zoo collection, maintained by ICSA staff, is large and fairly complete; products must detect at least 90% of these viruses. Tests on In the Wild viruses now use samples that have been replicated from The WildList Organization’s WildCore sample set; these viruses have been confirmed as an active threat in the user population. To be certified by ICSA, products must detect 100% of these viruses, using the version of The WildList that was released one month prior to the test date. Additionally, a “Common Infectors” criterion ensures that any viruses ICSA feels are important are dealt with in a way ICSA considers appropriate. False alarm testing was added to the testing process in 1999; Gateway Product criteria were established in July 1998; MS Exchange and Lotus Notes criteria are being drafted at this time [ICSA, 1999].
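To make such pass/fail thresholds concrete, below is a minimal sketch of how a certification-style detection test might be scored. It is purely illustrative: the command-line scanner interface, the convention that a non-zero exit code signals detection, and the directory layout are our assumptions, not a description of ICSA's actual harness.

```python
# Illustrative scoring of a certification-style detection test.
# Thresholds mirror the criteria described above: 100% In the Wild, >= 90% zoo.
from pathlib import Path
import subprocess

def detected(scanner_cmd: list[str], sample: Path) -> bool:
    # Assumption: the scanner is a command-line tool that exits non-zero
    # when it flags a file as infected (a common, but not universal, convention).
    result = subprocess.run(scanner_cmd + [str(sample)], capture_output=True)
    return result.returncode != 0

def detection_rate(scanner_cmd: list[str], samples: list[Path]) -> float:
    hits = sum(detected(scanner_cmd, s) for s in samples)
    return hits / len(samples)

def certify(scanner_cmd: list[str], itw_dir: Path, zoo_dir: Path) -> bool:
    itw_rate = detection_rate(scanner_cmd, sorted(itw_dir.glob("*")))
    zoo_rate = detection_rate(scanner_cmd, sorted(zoo_dir.glob("*")))
    print(f"In the Wild: {itw_rate:.1%}   zoo: {zoo_rate:.1%}")
    return itw_rate == 1.0 and zoo_rate >= 0.90
```

In practice a tester must also confirm that each detection was triggered by the correct sample, and must handle scanner crashes, timeouts and per-sample logging; the sketch shows only the scoring arithmetic.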
(ii) Westcoast Labs Checkmark
Westcoast Publishing established itself as a world leader in the testing and certification of antivirus software products in the mid-1990s with its introduction of the Westcoast Labs Checkmark. Test criteria depend upon the level of certification applied for. Level One measures the ability of the tested product to detect all of the viruses In the Wild, using samples based upon an edition of The WildList dated not less than two months prior to the product release date. At Level Two, products must also disinfect these viruses; in addition, the Level Two tests use the version of The WildList that was published one month prior to the product release date. Both tests are carried out using viruses replicated by Westcoast Labs, thus measuring the ability of products to detect the viruses which constitute the real threat. Many popular products are submitted to this for-fee testing scheme; certified products are announced on a regular basis [Checkmark, 2000].
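What a disinfection criterion involves can be seen in miniature below: after the product “cleans” an infected replicant, the tester checks the result against a known-good copy kept from before infection. This is a hedged sketch; the byte-for-byte comparison is one possible (and strictest) criterion, not Westcoast Labs' documented procedure.

```python
# Verify disinfection by comparing the cleaned file against the pristine
# pre-infection original. The byte-identical criterion is an illustrative
# assumption; real-world criteria may require only that the file is no
# longer infectious and still functions correctly.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def disinfection_ok(cleaned: Path, pristine: Path) -> bool:
    return sha256(cleaned) == sha256(pristine)
```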
(iii) University of Hamburg VTC Malware Tests
Overseen by security and antivirus expert Dr. Klaus Brunnstein, students from the Virus Test Center (VTC) at the University of Hamburg have been designing and performing tests of antivirus software since 1994. The results of these projects are made freely available to the general public. These tests have grown from simple tests of boot and file virus detection in 1994 to the current comprehensive virus (and malware) tests. (The VTC does not use real viruses in its boot sector virus tests; it uses image files, on the rationale that replicating real boot sector viruses is too time consuming. Its boot sector tests are therefore unreliable measures of a product’s ability to detect real boot sector viruses; all of the other testers mentioned here do use real viruses in boot sector testing.) In addition to its extensive zoo collection, in early 1999 the VTC began using samples replicated from The WildList Organization’s In the Wild collection in its tests for In the Wild viruses, thus assuring (with the exception of the boot sector tests) an accurate representation of a product’s ability to meet the real threat from these In the Wild viruses.
The VTC’s testing documentation states that users cannot distinguish whether such malevolent software is “just viral” or otherwise dangerous [VTC, 2000a]. Thus, the detection of more general forms of malicious software has become a major part of the tests – a decision based upon the VTC’s perception of user requirements. These malware tests were initiated in 1998, quickly followed by false-positive testing. While the tests are free, some products are excluded due to various conflicts cited by Professor Brunnstein [VTC, 2000b].
(iv) University of Magdeburg
Andreas Marx and his antivirus testing projects for the Anti-virus Test Center at the Otto-von-Guericke University of Magdeburg, conducted in cooperation with GEGA Software and Medienservice, are relative newcomers to the antivirus testing scene. The tests, sponsored by antivirus companies, provide magazines such as CHIP, FreeX, Network World, PC Shopping and PC-Welt with results, which have been included in those magazines’ published reviews. Seven people are involved in the testing process – some students, and some working for the University. According to Marx, most of the test criteria have been chosen by network administrators, users, magazines, AV companies and the University. These criteria include detection of In the Wild viruses (using the most current WildList) and disinfection (of non-boot sector viruses only); non-viral malware tests are carried out as well. Results are made available in both English and German. Participating vendors pay approximately $300.00 USD per product for testing, all of which is funneled back into the testing project.
(v) Virus Bulletin
Virus Bulletin (VB) has been testing anti-virus products since the publication started in 1989. Products for the various platforms are reviewed regularly in the VB Comparative Reviews, which test the products against a zoo collection (standard DOS and Windows file infectors, macro viruses and polymorphic viruses) as well as a recent In the Wild set. For each comparative, the Virus Bulletin In the Wild set is based upon a version of The WildList announced approximately two weeks prior to the product submission deadline; samples replicated from The WildList Organization’s reference collection are used. In January 1998, the VB100% award scheme was implemented. This award is given to products that detect all of the In the Wild file and boot viruses during on-demand scanning. The scheme has grown since then, and now demands complete In the Wild detection during both on-demand and on-access scanning. Aside from the detection rate tests, the VB comparative reviews also include tests of scanning speed, on-access scanner overhead and false positive rate. In fact, a “no false positives” criterion is to be introduced into the VB100% award scheme for reviews published from June 2000 onwards.
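A scanner overhead figure of this kind reduces to timing a fixed workload with the on-access scanner disabled and then enabled. The sketch below shows the arithmetic; the file-copy workload and run-averaging are our own assumptions rather than Virus Bulletin's published method.

```python
# Rough sketch of an on-access overhead measurement: time a fixed file-copy
# workload with the resident scanner off, then on, and report the slowdown.
# The workload and averaging scheme are illustrative assumptions.
import shutil
import time
from pathlib import Path

def timed_copy(src_dir: Path, dst_dir: Path, runs: int = 3) -> float:
    timings = []
    for _ in range(runs):
        if dst_dir.exists():
            shutil.rmtree(dst_dir)          # start each run from a clean slate
        start = time.perf_counter()
        shutil.copytree(src_dir, dst_dir)   # the workload being timed
        timings.append(time.perf_counter() - start)
    return sum(timings) / runs

# Usage (the scanner is toggled manually between the two calls):
#   baseline = timed_copy(Path("testset"), Path("copy"))  # scanner disabled
#   loaded   = timed_copy(Path("testset"), Path("copy"))  # scanner enabled
#   overhead = (loaded - baseline) / baseline * 100       # percent overhead
```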
Why this isn’t enough from a User Perspective
Given that anti-virus tests have improved dramatically over the last several years
as the expertise of the reviewing community has increased, the need for more
comprehensive tests may seem unclear. In this section, reasons why tests must
move to the next level will be examined.
The much-needed introduction and subsequent development of The WildList as a testing criterion provided a reality check to the antivirus industry. With this criterion, users now have a minimum baseline of what any competent (appropriately updated) anti-virus product should detect. This criterion is the cornerstone of meaningful antivirus software testing. Indeed, some testers have moved to a one-month WildList, citing the need to show users the ability of products to respond quickly to the ever-changing threat. However, the increased threat from fast-spreading viruses such as Melissa and LoveBug underlines the need for yet another, more complex shift in focus within testing environments.
The testing industry has also moved forward with various tests related to disinfection and to the on-access performance of scanners. As mentioned above, the VB100% certification offered by Virus Bulletin recently added on-access tests to its arsenal of stringent antivirus software metrics; both ICSA and Westcoast Labs have recently implemented disinfection tests. While these tests are far from complete, they mirror the development of the early In the Wild testing.
Clearly, tests are continuing to advance as the industry matures. However, it is not enough merely to expand. As products become increasingly complex, more complex tests are required. This rapidly becomes cost-prohibitive; thus, it is important to expand the testing methodology in the areas that are most important to user protection, using metrics that are meaningful both to users and to developers. We propose a new, functionality/requirements-based approach, which fulfils the above requirements and provides an excellent return on investment for test costs.
Consider a typical anti-virus product. In essence, most of the protection provided by the product is static: that is, the philosophy behind the product is to detect and (sometimes) remove viruses that are already known to the creators of the anti-virus product. However, as was so clearly demonstrated by the explosion of Melissa infections, such an approach is not without risk: as computers become increasingly interconnected, there is great potential for viruses which spread faster than detection and removal solutions for them can be disseminated.
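A toy calculation shows the scale of the problem. Suppose a mass-mailing virus doubles its infected population every hour, while a signature update takes a full day to reach users; both figures are invented for illustration, not measurements of Melissa or LoveBug.

```python
# Toy model of viral spread versus signature distribution. The parameters
# (one-hour doubling, 24-hour update latency) are invented for illustration.
doubling_time_hours = 1.0
update_latency_hours = 24.0
initial_infections = 1

# Exponential growth until the signature update arrives.
infections_before_update = initial_infections * 2 ** (update_latency_hours / doubling_time_hours)
print(f"Machines infected before signatures arrive: {infections_before_update:,.0f}")
# -> roughly 16.8 million, illustrating why purely reactive, signature-based
#    protection struggles against fast spreaders.
```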