================
File 9EPILOG.TXT
  Conclusion
================
Remark: no changes since last test (2001-10).
 

When reporting scanner problems, we have tried to be as fair as 
possible. When a scanner did not touch all objects of a testbed, 
without giving any error message or other indication, we determined 
those parts which had not been scanned and started up to two 
postscans. 
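The postscan selection described above can be sketched roughly as
follows. This is a hypothetical illustration only: the function names
and log format are our own inventions, not part of the actual test
harness.

```python
# Hypothetical sketch of the postscan procedure described above.
# Names (run_scanner, scanned_files) and the log format are
# illustrative assumptions, not the real VTC harness.

MAX_POSTSCANS = 2  # "up to two postscans"

def scanned_files(log_lines):
    """Extract the set of object paths a scanner reported touching.

    Assumes each non-empty log line starts with the object path.
    """
    return {line.split()[0] for line in log_lines if line.strip()}

def postscan(testbed, run_scanner):
    """Run an initial scan, then rescan untouched objects up to
    MAX_POSTSCANS times; return the objects never touched."""
    remaining = set(testbed)
    for _ in range(1 + MAX_POSTSCANS):        # initial scan + postscans
        if not remaining:
            break
        log = run_scanner(sorted(remaining))  # scanner yields log lines
        remaining -= scanned_files(log)
    return remaining
```

In this sketch, any object still in `remaining` after the final
postscan would be reported as "not scanned, without error message".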

Generally, we assume that AV producers do their best to support
their users with products of as good quality as they can achieve. We
therefore wish to help them determine the state of quality of their 
products and improve them, thus helping their customers.
 
Second, though we have done our best to ensure that our virus 
databases are relevant and that our test procedures are fair, we 
cannot be absolutely sure that we have not made any mistake (no 
scientist can ever be absolutely sure).

We are esp. aware that our test has a *rather limited scope*, esp. 
due to the following inherent decisions:

   1) We have tested ONLY on-demand scanners. Many products offer
      broader functionality, as they include resident components,
      integrity checkers, etc. 

   2) We did NOT test the ability of scanners to detect viruses in 
      memory, and we also did not test whether cleaning is done 
      successfully.

   3) Concerning polymorphic viruses, this test contains a large 
      number of static samples. Currently, we are preparing a dynamic
      test (where we generate multiple virus generations), but this 
      method is not yet mature enough for publication.

   4) We have tried our best to have only real viruses in the resp. 
      databases. In cases where selected AV producers were convinced
      that non-detected samples were non-viral, and we could not
      prove their virality immediately, we have tentatively deleted 
      the related entries from our testbeds and results. We are 
      analysing the virality (or non-virality) of such files before 
      the next test.
      
      Generally, it is not always easy to prove the viral 
      characteristics of some malware. Apart from limitations in 
      manpower and time, self-replication may depend on hardware, 
      system and other details. 

      In general, VTC - like other AV institutions - admittedly does 
      NOT have the capacity, with respect to personnel, engines and 
      time, to analyse EVERY suspicious code for detailed features, 
      including virality and maliciousness.

   5) Concerning boot/MBR infectors, VTC tests traditionally use 
      SIMBOOT, which simulates boot sector behaviour to a certain 
      degree. It is well known that such simulators have inherent 
      limitations, which esp. produce inadequate results for 
      scanners that test the physical layout of a boot virus on 
      real diskettes. On the other hand, it is beyond the technical 
      and human capacity of a university laboratory to test about 
      100 products against more than 4,000 diskettes with real 
      infections.

   6) Our test of AntiVirus products for "false positive" alarms is
      just a first step. We intend to publish more details, esp. 
      including detection of malware in packed objects (we refrained
      from publishing related results to give AV producers a fair 
      chance to overcome present weaknesses both in their detection
      of packed viral objects and in malware detection). 

   7) It is beyond our scope to evaluate user interfaces. Here, we 
      regard users and - to some degree - well-qualified journalists 
      as adequate testers. Moreover, we refrain from reporting timing 
      behaviour, as our test procedure is rather untypical of user 
      requirements (we sincerely hope that users will NEVER have as 
      many infected objects as in our testbeds!).

   8) Concerning the Windows-98/Windows NT/Windows-2000 tests, we 
      received some products as general "32-bit engines", applying 
      to all platforms. In such cases, we assumed that such engines 
      run on all these platforms, and we tested them on each 
      platform where possible.


With such restrictions, what are these tests good for? Regardless
of the drawbacks mentioned above, we believe that our tests are of 
some value.

   A) In the sequence of VTC tests, there is significant information 
      about how the quality of products develops. In addition, our 
      test of detection of "packed viruses" makes quite clear, once 
      again, that there is a strong need for significant improvement 
      of AntiVirus products. 

   B) As users don't care whether some "unwanted" feature should be 
      classified as "viral", "trojan", "Wannabe", "Germ" or simply 
      "Malware", only a few products help protect customers against
      such threats; we also regret that some AV producers do not 
      seem sufficiently conscious of these threats.
      
   C) A rather valuable part is the naming cross-reference. It can 
      help AV producers become compliant with the CARO virus naming 
      scheme, and it can be used to figure out exactly which virus 
      users have when their favorite scanner reports a specific 
      name.

   D) Test results provide some general impression of how good a 
      scanner is at detecting viruses. Moreover, a by-product of our
      "fairness" (having frozen the viral databases some 8 weeks 
      ago) yields some information about quality improvements of 
      those scanners for which several versions were made available 
      during the test period.
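The naming cross-reference mentioned in point C can be thought of as
a mapping from (scanner, reported name) pairs to CARO-compliant
names. The sketch below is purely illustrative; the scanner and
virus names are invented and do not refer to any real product or
sample.

```python
# Hypothetical sketch of a naming cross-reference lookup.
# All scanner and virus names here are invented for illustration.

cross_ref = {
    ("ScannerA", "Evil-1234"):  "Example.Virus.A",
    ("ScannerB", "EVIL.VIR.A"): "Example.Virus.A",
}

def caro_name(scanner, reported):
    """Map a scanner-specific report to a CARO name, if known."""
    return cross_ref.get((scanner, reported), "unknown")
```

With such a table, two scanners reporting different names can be
recognised as detecting the same virus.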
   

Generally, we will be very interested to learn of any comments 
on our approach and test method, as this may help us to improve 
test procedures for future tests and achieve a higher level of 
quality where possible. However, every critical remark should bear 
in mind that we are only able to test the behaviour of any product 
based on information made available by its manufacturer. We have no 
insight into how the products work, and we have not tried to 
reverse-engineer any product to understand the problems we 
experienced. We therefore just report such problems and ask 
manufacturers to analyse them themselves; sufficient information 
concerning test protocols has been made available by us (see 
SCAN-RES), and we are prepared to help AV producers upon special 
request where possible, so that they can support their customers by 
improving the quality of their products.

Finally, we would like to express our hope that users will find
this test and its extensive documentation useful.

