                                     
                                     
                                     
                                     
                                     
                                     
                                     
                                     
                                     
                                     
                                     
                                     
          ANTIVIRUS SCANNER ANALYSIS BASED ON JOE WELLS' LIST OF
                     PC VIRUSES IN THE WILD 7/1997
                                     
                              Marko Helenius
                                     
    Virus Research Unit, University of Tampere, Department of Computer
   Science, P.O.BOX 607, 33101 TAMPERE, FINLAND, Tel: +358 31 215 7139,
               Fax: +358 31 215 6070, E-mail: cshema@uta.fi,
      http://www.uta.fi/laitokset/virus, ftp://ftp.cs.uta.fi/pub/vru




This paper introduces our methods of evaluating computer antivirus
products' virus detection capabilities, together with the results of an
antivirus scanner analysis carried out at the Virus Research Unit during
1997. The analysis covered DOS, Windows 95, Windows NT and memory
resident versions of the scanners. The test set was based entirely on
Joe Wells' list of PC viruses in the wild 7/1997. I have also tried to
point out what a reader of the results should be aware of.


ACKNOWLEDGEMENTS

Permission is granted to distribute copies of this information, provided
that the contents of the files and the information are not changed in
any way and the source of the information is clearly mentioned. For
republication, permission must be obtained from the Virus Research Unit.
To avoid publishing misleading information, those who wish to quote the
results should discuss the matter with the Virus Research Unit.

A lot of co-operation with computer antivirus researchers was required
to accomplish the analysis. I would especially like to thank the
following persons, who were of great help in carrying out the analysis.

Amir Elbaz, Eliashim Microcomputers Ltd.
David Chess, IBM T.J.Watson Research Center
Dmitry Gryaznov, S&S International PLC.
Eugene Kaspersky, KAMI Group
Fridrik Skulason, FRISK Software International
Gerard Vuille, Metropolitan Network BBS Inc.
Jan Hruska, Sophos Plc.
Jimmy Kuo, McAfee Associates
Karsten Ahlbeck, Karahldata
Mikael Albrecht, QA Information Security Oy
Mikko Hypponen, Data Fellows Ltd.
Pavel Baudis, ALWIL Software
Rene Visser, Symantec Peter Norton's Group
Shannon Talbott, McAfee Associates
Tarkan Yatiser, VDS Advanced Research Group
Tjark Auerbach, H+BEDV GmbH
Vesselin Bontchev, FRISK Software International
Wolfgang Stiller, Stiller Research
Yury Lyashchenko, DialogueScience Inc.

1. INTRODUCTION

End users and typical magazine evaluators are usually unable to evaluate
accurately the most critical property of antivirus products: how well
the products can prevent or find viruses (Bontchev 1994, Helenius 1996).
This is not possible because they typically have neither sufficient
access to viruses nor sufficient skills to prepare a decent test bed.
This analysis has been prepared to provide accurate information on how
well each antivirus scanner can find viruses found in the field. It
should be remembered, however, that the results reflect only the
situation in August 1997, and many scanners have been improved since
then. In fact, some of the scanners can now find the viruses they missed
at the time. This does not necessarily mean that they would find all
viruses if a new antivirus scanner analysis were prepared based on the
current situation.

One problem with antivirus scanner evaluations is that it is too often
unclear how the testers performed the tests and what viruses they used.
Antivirus scanner evaluation is very challenging and it is easy to make
mistakes. In fact, I must admit that even after thorough preparation and
after discussions with other computer antivirus researchers, it is
possible that this analysis still contains mistakes. Thus I believe it
should be revealed to the public how the antivirus scanner analysis was
prepared and what viruses were used. I also believe that a person
evaluating computer antivirus products should admit the shortcomings of
his or her tests to avoid spreading misleading information, and it
should be clear what was actually tested. To give a more exact view of
our work, I have briefly presented our testing methods and some facts
that readers should be aware of.

The following sections present the problems of preparing the test set,
how the analysis was carried out, the results of the analysis and what a
reader of the analysis should be aware of.

2. PREPARING THE TEST

The test set was chosen to include viruses from the first part of Joe
Wells' list (Wells 1997). The first part contains viruses which have
been reported as being in the wild by at least two computer antivirus
researchers, and these viruses are therefore the most likely to pose a
real threat to computer users. The test set consisted of 88 boot sector
viruses replicated onto 272 target files, 128 file viruses replicated
onto 10350 target files and 55 macro viruses replicated onto 797
document files.

2.1 EXCLUDING NON-VIRUSES

Trojan horses, joke programs, intended viruses, first generation
viruses, innocent files and other non-viruses should be excluded from
the test set (Bontchev 1993). Otherwise products which are good at
detecting true viruses but "bad" at detecting non-viruses would receive
a lower score than they deserve, while products which give false alarms
could perform well. After all, we should be analysing how well products
can detect viruses. In this analysis a lot of work went into excluding
Trojan horses, joke programs, droppers, first generation viruses,
innocent files, intended viruses and damaged files from the test set.
The non-virus removal process was carried out with the help of an
invention implemented at the Virus Research Unit called the "Automatic
and Controlled Virus Code Execution System" (Helenius 1995). It
automatically executes virus code in a controlled area and saves
infected objects into a specific network directory. The system is a
powerful tool for automatic virus replication because it is implemented
in such a way that it can be left to work on its own: it automatically
recovers from hangs, damage and CMOS memory failures that execution of
malicious software may cause. During the summer of 1997 the system was
extended to the Windows environment, which also made automatic
replication of macro viruses possible. A conceptual sketch of such a
replication loop follows.
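The following is a minimal, self-contained sketch of the control loop
behind such a replication system. Everything in it is hypothetical and
simulated: the real system drives an actual test machine, whereas these
stub functions only model the restore/execute/recover/harvest cycle
described above.

    # A simulated sketch of an automated replication loop; all functions
    # are hypothetical stubs standing in for real machine control.
    import random

    def restore_machine():
        """Restore a clean disk image and CMOS state (simulated)."""
        print("machine restored to a clean state")

    def execute_sample(sample):
        """Run the sample on the test machine (simulated outcome)."""
        return random.choice(["infected goats", "hung", "no effect"])

    def harvest(sample):
        """Save infected objects into the result directory (simulated)."""
        print(f"saving infected objects produced by {sample}")

    def replicate_all(samples):
        for sample in samples:
            restore_machine()
            outcome = execute_sample(sample)
            if outcome == "hung":
                # the system recovers from hangs without human help
                print(f"{sample}: hang detected, resetting machine")
            elif outcome == "infected goats":
                harvest(sample)

    replicate_all(["sample_a.com", "sample_b.exe"])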

2.2 PROBLEMS WITH THE TEST SET

For a virus to be included in an "In the Wild" test set, it must have
been found in the field at least once. This is not, however, as obvious
as it sounds. How do we know that a virus has been found in the field at
least once? Someone must have reported to some antivirus researcher that
the virus has been found in the field, but how do we know that someone
has reported the virus to some antivirus researcher? One solution is to
use Joe Wells' list (Wells 1997), which includes viruses that have been
reported as found in the field by leading antivirus researchers. It does
not, however, contain all the viruses found in the field, because not
all cases are reported to Joe Wells. Nevertheless, Joe Wells' list seems
to be currently the only accurate source of such information, and it is
the only list which most antivirus researchers support.

Another problem with Joe Wells' list is that it does not always state
exactly which variant of a virus was found in the field. In most cases
the exact variant can be identified directly, but sometimes further
examination is needed, which causes problems when constructing the test
set. Sometimes I could obtain the original virus from antivirus
researchers, but this was not always possible. I had to compare several
sources of information against each other to determine which variant of
the virus was "In the Wild". In most cases this comparison was
successful and I could almost certainly identify the correct variant,
but I still cannot be absolutely sure that all the variants were chosen
correctly.

2.3 REPORT FILE VERIFICATION

After the scanning reports were ready, the reports and the percentage
calculation logs were sent to the antivirus product producers for
verification. This made the analysis more accurate, because some faults
in the original reports were revealed. For example, it turned out that
Windows 95 corrupts some boot sector viruses and therefore some scanners
could not detect certain corrupted boot sector samples; these scanners
were, however, able to detect the actual working viruses. In addition,
one incorrect variant was revealed and some faults in the scanning
methods used with some products were corrected.

2.4 CROSS-REFERENCES

I believe that the public should be able to see what was actually tested
and how the results of the analysis were reached. This is the reason for
preparing cross-references, which clearly show which sample files were
found by which products. First we use awk scripts to organise the report
files into a uniform format. After this we use a specific program which
unites the report files into a cross-reference, as sketched below.
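The following is a minimal sketch of such a uniting step, assuming each
report has already been normalised into lines of the form "<sample
file><TAB><reported virus name>". The file format and all names here are
assumptions for illustration, not the actual VRU program.

    # Unite normalised scanner reports into one cross-reference (CSV):
    # one row per sample file, one column per product; an empty cell
    # means the product missed that sample.
    import csv
    from pathlib import Path

    def load_report(path):
        """Map each detected sample file to the reported virus name."""
        detections = {}
        for line in Path(path).read_text().splitlines():
            sample, name = line.split("\t", 1)
            detections[sample] = name
        return detections

    def cross_reference(report_paths, all_samples, out_path):
        reports = {Path(p).stem: load_report(p) for p in report_paths}
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["sample"] + list(reports))
            for sample in all_samples:
                row = [reports[prod].get(sample, "") for prod in reports]
                writer.writerow([sample] + row)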

3. RESULTS OF THE ANALYSIS

The following sections describe the results and the virus scanning
methods of the analysis. The detection percentages were calculated so
that for each virus an average of detection over its sample files was
calculated first, and the final percentage is the mean of these
per-virus averages. In other words, if a scanner could detect only part
of the sample files of a certain virus, the partial detection was
counted as the corresponding fraction. A drawback of this method is that
it does not fully take into account that a partly detected virus may
cause trouble for a user, because undetected files may lead to
reinfection. On the other hand, even an unreliably detected virus does
get caught; this slows down the spread of the virus, and thus unreliable
detection should be taken into account. Because estimating reliable
detection would have been too uncertain, the average count method was
chosen; a small sketch of the calculation follows.
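As a minimal illustration (my own sketch, not the actual calculation
program), the scoring can be expressed as follows:

    # The "average count" method: every virus contributes equally to the
    # final percentage, regardless of how many sample files it has.
    def detection_percentage(samples_by_virus, detected):
        per_virus = [
            sum(s in detected for s in samples) / len(samples)
            for samples in samples_by_virus.values()
        ]
        return 100.0 * sum(per_virus) / len(per_virus)

    # Virus A: 2 of 4 samples found (50%), virus B: 1 of 1 (100%);
    # the reported score is the mean of the averages, i.e. 75%.
    samples = {"A": ["a1", "a2", "a3", "a4"], "B": ["b1"]}
    print(detection_percentage(samples, {"a1", "a2", "b1"}))  # 75.0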

3.1 VIRUS DETECTION ANALYSIS OF DOS SCANNERS

The analysis of on-line DOS scanners was carried out against file, macro
and boot sector viruses. File and macro virus detection capabilities
were analysed by executing the DOS scanners from batch files using the
switches presented in Table 1; a sketch of such a driver follows the
table.

Product              Command line
Avast 7.70           LGUARD [PATH] /P /S /R[REPORT]
AVP 3.0              AVP /W=[REPORT] /S /Y /Q [PATH]
Dr. Solomon 7.74     FINDVIRU [PATH] /REPORT=[REPORT] /LOUD /VID
Dr.Web 3.24          DRWEB [PATH] /CL /RP[REPORT]
F-PROT 2.27a / 3.0   F-PROT [PATH] /NOWRAP /LIST /REPORT=[REPORT]
IBM Antivirus 3.0    IBMAVSP -LOG[REPORT] -PROGRAMS -VLOG -NB -NREP -NWIPE 
                     -NFSCAN [PATH]
Norman 4.20          Scan executed from the graphical user interface
Integrity Master     IM /NOB /NE /VL /REPA /1 /RF=[REPORT]
Norton Antivirus 3.0 Scan executed from the graphical user  interface
McAfee Scan 3.0.3    SCAN /REPORT S2[REPORT] /RPTALL /NOMEM /SUB [PATH]
Sweep 3.00           SWEEP -ALL -REC -NK -NAS -NB -P=[REPORT]
Thunderbyte 8.02     TBSCAN [PATH] largedir expertlog noautohr batch log
                     logname=[REPORT]
Virusafe 7.5         VREMOVE [PATH] /R /C /D
H+BEDV Avscan 3.72a  AVSCAN [PATH] /LH[REPORT] /Q /S
Iris Antivirus       CURE [PATH] /R:1 /O
    Table 1: Command line switches when analysing on-line DOS-scanners
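The batch files essentially substituted the test directory and a report
file name into each product's command line. A hypothetical Python
equivalent of such a driver is shown below; the command templates are
copied from Table 1, while the paths and report names are assumptions.

    # Run each scanner over the test set, filling in the [PATH] and
    # [REPORT] placeholders of Table 1; a hypothetical driver, not the
    # actual batch files used in the analysis.
    import subprocess

    COMMANDS = {
        "fprot": "F-PROT [PATH] /NOWRAP /LIST /REPORT=[REPORT]",
        "avp":   "AVP /W=[REPORT] /S /Y /Q [PATH]",
    }

    def run_scanners(test_path):
        for product, template in COMMANDS.items():
            cmd = (template.replace("[PATH]", test_path)
                           .replace("[REPORT]", product + ".rep"))
            # shell=True mirrors running the line from a batch file
            subprocess.run(cmd, shell=True)

    run_scanners("C:\\TESTSET")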
                                     
Detection of boot sector viruses was analysed with the help of Dmitry
Gryaznov's Simboot, which emulates infected floppy diskettes by writing
infected diskette images to memory and assigning a memory segment as a
floppy drive (Gryaznov 1994). If there were viruses which were not
found, or if there were differences between the DOS and Windows 95 boot
sector virus detection results, these cases were investigated manually.
Table 2 presents the results of the on-line DOS scanner virus detection
analysis.

DOS-scanner     Boot sector viruses (%)  File viruses (%)  Macro viruses (%)
Avast 7.70 (25.8.1997)        100             100               90.91
AVP 3.0 (2.9.1997)            100             100               98.18
Dr. Solomon 7.74              100             100               100
Dr.Web 3.24                   96.59           97.2              100
F-PROT 2.23a                  100             99.15             100 (45.81)
H+BEDV Avscan 3.72a           100             94.65             80.00
IBM Antivirus 3.0 (15.8.1997) 98.86           99.99             99.86
Integrity Master 3.21a        97.73           92.07             91.85
Iris Antivirus 22.00          100             94.97             94.47
McAfee Scan 3.0.3             98.86           99.74             100
Norman 4.20 (3.9.1997)        100             100               97.13
Norton 3.0 (14.8.1997)        98.11           98.44             100
Sweep 3.00                    100             100               94.55
Thunderbyte 8.02              100             98.2              96.22
Virusafe 7.5 (13.8.1997)      99.24           96.84             87.05 (85.45)
             Table 2: Virus detection analysis of DOS scanners

There are two detection results for Virusafe because, by default, the
evaluated version did not search for viruses in XLS files; users must
define the extension before Excel viruses can be found. The extension
should be set correctly in later releases of the product. With F-PROT
the main emphasis is on the F-MACROW program, which is designed for
macro virus detection and removal and finds all macro viruses included
in the test bed. The other percentage is there just to point out that
the F-PROT.EXE program should not be used for reliable macro virus
detection.

3.2 VIRUS DETECTION ANALYSIS OF MEMORY RESIDENT SCANNERS FOR DOS

Memory resident scanners were analysed against file and macro viruses by
copying files while the memory resident part of a product was activated.
In the case of Avast, Dr. Solomon's Antivirus Toolkit, IBM Antivirus and
McAfee Scan, the memory resident scanners were additionally analysed by
a file execution method, because these products can detect more viruses
when actual virus code is executed. The automatic and controlled virus
code execution system made even the file execution method feasible. A
virus was counted as found if the product could prevent changes to the
executable system areas and could prevent execution of the infected
file; otherwise the product was considered unable to completely find the
virus. Boot sector virus tests of memory resident scanners were carried
out by writing images of infected diskettes onto floppy diskettes and
then accessing the infected diskettes with the "CHKDSK" command. Table 3
presents the results of the memory resident scanner analysis.

Memory resident scanner Boot sector virus(%) File virus(%) Macro virus(%)
Sweep 3.00, Intercheck             100       100            0
Avast 7.70 (25.8.1997), Rguard     100       96.8           0
Dr. Solomon 7.74, Virus Guard      100       90.47          0
Virusafe 7.5 (13.8.1997), VS       98.86     92.89 (87.02)  83.41 (81.82)
Norton 3.0 (14.8.1997), NavTsr     96.97     73.44          0
Thunderbyte 8.02, TbScanX          89.77     56.19          0
McAfee Scan 3.0.3, Vshield         85.44     73.29 (66.51)  0
F-PROT 2.23a, Virstop              81.82     58.4           0
IBM 3.0 (15.8.1997), Ibmavdr       70.45     72.5           89.09
 Table 3: Virus detection analysis of memory resident scanners for MS-DOS

McAfee Associates' Vshield was analysed both with and without the /POLY
switch activated; the better detection result is with the /POLY switch
and the other one without it. There are also two detection results for
Virusafe's memory resident scanner. For file viruses the better result
is with the full virus table loaded and the other one with the common
virus table; for macro viruses the evaluated version did not search XLS
files by default, and the better result is with the XL? extension
defined. In addition to the analysed scanner components, Avast, Norman
Antivirus and Iris Antivirus have behaviour blockers, which were not
analysed.

In most cases memory resident scanners cannot detect as many viruses as
on-line scanners. The only exception seems to be Sophos Intercheck,
which actually relies on the DOS scanner in a standalone installation or
on the NetWare scanner in a network installation. Another general note
is that only two of the memory resident scanners can detect macro
viruses. There are usually better scanners for the Windows environment,
and thus most producers do not wish to burden their DOS versions with
macro virus detection capabilities.

3.3 VIRUS DETECTION ANALYSIS OF WINDOWS 95 SCANNERS

Windows 95 scanners were analysed against file and macro viruses by
executing the scanning from the graphical user interface. The scanning
was performed with default options. Some options may have been changed,
such as preventing some scanners from cleaning viruses and log creation
options, but options which could affect virus detection capabilities
were left in their default positions. There are, however, two
exceptions: Dr. Solomon's Antivirus Toolkit was run with the /VID option
and Thunderbyte Antivirus with low heuristic sensitivity. The reason for
this was to prevent these products from increasing their heuristic
sensitivity after a few viruses were found. The boot sector virus
detection analysis of Windows 95 scanners was performed by writing
diskette images one by one onto floppy disks and usually launching the
scanning from the graphical environment. The keyboard controller part of
the Automatic and Controlled Virus Code Execution System was utilised
for automating this task. Some products made it possible to launch the
scanning from the command line, and this was utilised when possible.
Table 4 presents the results of the Windows 95 scanner analysis.

Windows 95 scanner    Boot sector viruses(%) File viruses(%) Macro viruses(%)
Avast 32 (25.8.1997)                100          100         86.77
AVP 3.0 (2.9.1997)                  98.86        100         98.18
Dr. Solomon 7.74                    100          100         100
F-PROT Professional 3.0 (23.8.1997) 100          99.06       100
H+BEDV Antivir 1.02                 97.73        93.91       80.00
IBM Antivirus 3.0 (15.8.1997)       98.86        99.99       99.86
IRIS Antivirus 22.00 (13.7.1997)    100          94.97       94.47
McAfee Scan 3.0.2                   100          99.74       100
Norman 4.20 (3.9.1997)              100          100         97.13
Norton Antivirus 2.0.1 (14.8.1997)  96.97        98.44       100
Perforin for Winword 2.0b           ------       -----       89.09
Sweep 3.00                          100          100         94.55
Thunderbyte 4.00                    100          98.36       96.22
Virusafe 7.5 (13.8.1997)            98.11        96.82       87.05
          Table 4: Virus detection analysis of Windows 95 scanners

In most cases the Windows 95 versions could detect the same number of
viruses as the DOS versions.

3.4 VIRUS DETECTION ANALYSIS OF MEMORY RESIDENT SCANNERS FOR WINDOWS 95

The analysis of memory resident scanners for Windows 95 was performed
against file viruses, macro viruses and boot sector viruses. File and
macro virus detection capabilities of the Windows 95 memory resident
scanners were analysed by copying files; only Avast for Windows 95 was
analysed by executing files. Boot sector virus detection was analysed by
writing infected diskette images onto floppy diskettes and accessing the
diskettes with the "CHKDSK" command. Table 5 presents the results.

Memory resident scanner  Boot sector virus(%) File virus(%) Macro virus(%)
Avast 32 (25.8.1997)                   100         99.22          0
Dr. Solomon 7.74                       100         100            100
F-PROT Professional 3.0 (23.8.1997)    100         99.11          90.91
IBM Antivirus 3.0 (15.8.1997)          74.43       72.5           94.41
McAfee Scan 3.0.2                      98.86       99.74          100
Norton Antivirus 2.0.1 (14.8.1997)     96.97       98.44          100
Sweep 3.00 (Intercheck)                100         100            94.55
Thunderbyte 4.00                       100         98.2           96.22
Virusafe 7.5 (13.8.1997)               98.86       94.7           85.45
Table 5: Virus detection analysis of memory resident scanners for Windows 95

Most of the analysed memory resident scanners seemed to detect viruses
as well as the corresponding on-line Windows 95 scanners. The only clear
exception is IBM Antivirus, whose memory resident component could detect
fewer viruses than the Windows 95 version of the scanner. The analysed
version of Avast 32 could not find macro viruses, but the current
version of the product should have this capability. F-PROT
Professional's detection result for macro viruses was lower because the
analysed version could not find Office 97 macro viruses; the current
version of F-PROT's Gatekeeper should have this capability.

3.5 VIRUS DETECTION ANALYSIS OF WINDOWS NT SCANNERS

Like the Windows 95 scanners, the Windows NT scanners were analysed
against file and macro viruses by executing the scanning from the
graphical user interface. The scanning was performed with default
options. Some options may have been changed, such as preventing some
scanners from cleaning viruses and log creation options, but options
which could affect virus detection capabilities were left in their
default positions. Again, Dr. Solomon's Antivirus Toolkit was run with
the /VID option and Thunderbyte Antivirus with low heuristic sensitivity
selected. Because of a tight schedule, boot sector virus detection
capabilities were not analysed in the Windows NT environment.

Windows NT scanner               File viruses(%) Macro viruses(%)
Avast 32 (25.8.1997)                  100              86.77
AVP 3.0 (2.9.1997)                    100              98.18
Dr. Solomon 7.74                      100              100
F-PROT Professional 3.00 (23.8.1997)  99.06            100
H+BEDV Antivir 1.02                   93.91            80.00
IBM Antivirus 3.0 (15.8.1997)         99.99            99.86
McAfee Scan 3.0.3 (20.8.1997)         99.74            100
Norman Virus Control 4.20 (3.9.1997)  100              97.13
Norton Antivirus 2.0.1 (14.8.1997)    98.44            100
Perforin for Winword 2.0b             -----            89.09
Sweep 3.00                            100              94.55
Thunderbyte 4.00                      98.36            96.22
Virusafe 7.5 (13.8.1997)              96.82            87.05
         Table 6: Virus detection analysis of Windows NT scanners

Each  analysed  Windows NT scanner could detect as  many  viruses  as  the
Windows 95 version of the product.

4. DISCUSSION AND CONCLUSIONS

A lot of work went into carrying out everything as well as possible.
Accomplishing an antivirus scanner analysis requires a lot of work, and
still there is always something to improve. This analysis, too, has
drawbacks which a reader of the results should be aware of while
examining them. First of all, the performance of checksum calculation
and active monitoring programs was not analysed, and the products'
disinfection capabilities were not examined. A thorough analysis should
also include a false alarm rate test, but because of restricted time
there was no false alarm test in this analysis. In addition, the tests
were not carried out while viruses were memory resident, although this
is often the case when a computer is infected with a virus.

It should also be noted that not all viruses found in the field were
included, because only the viruses in the first part of Joe Wells' list
were used. In addition, we might have made a mistake while checking the
correct variants of the viruses in the test set. Nor did we try to
measure how common each virus is, and thus the percentages do not
directly measure the actual risk of infection; they merely show what
percentage of the viruses used in this analysis the products could
detect. A lot of work went into excluding non-viruses, droppers and
first generation viruses from the test set. However, we might have made
mistakes, and therefore it is possible that some non-viruses are
included, although I believe there are only a few such mistakes. We also
did not try to check whether a product detects a virus reliably, i.e. we
did not separately count cases where a product did not detect all
replicates of the same virus. Finally, the results reflect only one time
period, and current detection results could be different; it is also
possible that there were some faults in the scanning methods which could
affect the results. Because of these drawbacks the results give only an
overall impression of the performance of the tested products.

Regardless of the drawbacks, I believe there are some advantages to this
analysis. We succeeded in including memory resident, DOS, Windows NT and
Windows 95 versions of the scanners in the analysis. One notable
achievement is that the boot sector virus detection analysis could be
performed in the Windows 95 environment. In addition, the test bed
included a large set of files. This analysis also produced advanced
cross-references, which clearly show which viruses were detected by
which product and under which name.

REFERENCES

Bontchev 1993  Bontchev Vesselin, "Analysis and Maintenance of a Clean
          Virus Library", available electronically via anonymous ftp as
          ftp://ftp.uni-hamburg.de/pub/virus/texts/viruses/virlib.zip
Bontchev 1994  Bontchev Vesselin, "An Analysis of Antivirus Scanners",
          18.11.1994, available electronically via anonymous ftp as
          ftp://ftp.uni-hamburg.de/pub/virus/texts/tests/vtc/test-01.zip
Gryaznov 1994  Gryaznov Dmitry, "Simboot: A New Tool for Testing
          Scanners", in the proceedings of the EICAR 1994 conference held
          in London, England, 23.-25.11.1994, hosted by S&S International
          Plc.
Helenius 1995  Helenius Marko, "Automatic and Controlled Virus Code
          Execution System", in the proceedings of the EICAR 1995
          conference held in Zurich, Switzerland, 27.-29.11.1995,
          available electronically via anonymous ftp as
          ftp://ftp.cs.uta.fi/pub/vru/documents/automat.zip
Helenius 1996  Helenius Marko, "Problems with Analysing Computer
          Antivirus Software and Some Possible Solutions", in the
          proceedings of the EICAR 1996 conference held in Linz, Austria,
          17.-19.11.1996. To order the proceedings, please contact EICAR
          at http://www.eicar.com
Wells 1997     Wells Joe, "PC Viruses in the Wild", available
          electronically via anonymous ftp as
          ftp://ftp.cs.uta.fi/pub/vru/wildlist/0797.zip


