# @(#)84        1.5  src/examples/type_mgr/readme, examples.src, os2dce21.dss, 960602a.1  5/12/96  10:13:44
#
#/********************************************************************
# COMPONENT_NAME:  examples.src
#
# FUNCTIONS: Instruction file for Type_Mgr sample application
#
# ORIGINS: 27
#
# (C) COPYRIGHT International Business Machines Corp. 1995
#  All Rights Reserved
#  Licensed Materials - Property of IBM
#
#  US Government Users Restricted Rights - Use, duplication or
#  disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
#********************************************************************/

            INTRODUCTION

This file is an overview and user guide for the type manager example
program. This program implements a simple client-server distributed
application, along with a management (administration) application.

The actual application RPC operations implemented are trivial;
the intention of the example is to demonstrate particular techniques
that can be abstracted to production applications.

This document is arranged as follows:

Features - highlights what we think are the items of interest in the example.
Program Descriptions - of the server, client (customer) and administration.
Instructions - on how to run the program.
Notes and what's missing - what we have left "as an exercise to the reader."



                                FEATURES

Type Manager
------------
Defines two types, creatively named TYPE1 and TYPE2, for the objects
managed by the server.  Each object managed by the server is
associated with one of the types, and when a call to act on one of the
objects arrives at the customer interface, the server's runtime will
dispatch the call to the corresponding manager routine based on the
type of the object.

See: cust.idl, server.[ch], server_type_[12]_ops.c.


Administration Interface
------------------------
A separate interface is provided for administrative RPCs to manage
the server.

See tm_admin.*, server_admin_ops.c, server_setup_admin.c.


Object Oriented Namespace
-------------------------
Following the guidelines in Part 1 of the Application Developers
Guide, each object occupies its own name service entry.  In the entry
is the object uuid and the binding information for the server managing
the object.


Binding Handles
---------------
One of the most frequently occurring problems we have encountered is a
lack of robustness in application clients' attempts to get a valid
binding handle for a server.  All sorts of problems crop up, such as no
route from the client to the server for a particular server network
address, stale information in the CDS cache, and stale endpoints left
in the endpoint map by crashed servers.

An rpc binding utility is provided that takes a fast-path approach to
acquiring a server binding, and then, if this fails, starts back at
the specified name service entry and works its way through each
compatible binding, exhaustively searching the endpoint map of the
server specified by the binding.  The utility uses a context to keep
track of where the client is in this process.  That is, the client
application acquires a context, then a compatible binding, and then
makes the RPC call.  If the call fails, the client can call the
utility again, and through the context it is able to take up wherever
it left off in its quest through name service entries and endpoint
maps.  One parameter of the utility routine is a function to sort the
binding vector retrieved from the name service lookup call.  Thus a
routine to sort the binding vector, say by prioritizing a network
shared by the client and server, can be passed into the utility (see
Local Host Addresses for a tip on how to get started on this).

See: bind_util.[ch]. (these files are in the "common" sibling directory)


Local Host Addresses
--------------------
Since each object is associated with and managed by only one server,
at startup time a server must ensure no other running server is already
managing its objects.  Most of this is pretty rote, but one neat utility
that is provided by this example retrieves all network interface address
information for the local host.  This is used to ensure a particular server
is always started on the same host (we are assuming the objects managed
by the server are associated with some particular host, for example,
printers and a print server).

See util.c. (this file is in the "common" sibling directory)


Reentrant Database
------------------
A database of the objects managed by the server and their types is
provided.  Calls to the database API automatically hold a mutex while
making any changes.  For inquiries, a set of calls is provided to
acquire a context, step through each entry, and free the context.  The
purpose of the context is to allow concurrent access to the database
without having to lock individual records, yet without the threat of
the current entry being deleted while it is pointed at.  An example of
an idempotent, reentrant initialization process is also provided.

See server_object_db.[ch].


Threads Signal Handler and Exception Handlers
---------------------------------------------

Using the threads facility, a DCE flavour of signal handling is
implemented.  The main thread of the server is set up so that if any
of the specified signals is received by the process, the main thread
will be cancelled.  Since an exception handler is in place, the
thread will be alerted (via an exception) when an expected signal is
received.  This allows the server to run the necessary shutdown
routines when expected signals (i.e. SIGTSTP, SIGINT) are received.

All example programs have exception handlers in place around important
code areas (i.e. where RPCs are made, where the server pends).

See server.c, tm_admin_calls.c, & client_calls.c


                          PROGRAM DESCRIPTIONS

Server
------
This server offers a customer interface managing objects of two types.
For each type, the server contains a set of three manager routines
corresponding to the calls specified in the interface definition and
referenced through a manager entry point vector (mepv). As mentioned
above, the objects the server manages are trivial: they are strings
returned through an RPC argument that indicate which operation (RPC)
in which manager (type) was executed.  The server registers its
interface with the RPC runtime twice, once for each mepv/type pair.
Each time the server registers the interface, it associates the
interface specification with an mepv and a UUID representing the type.

In turn, each object the server manages is represented by an object
UUID, and this object UUID is associated in the server RPC runtime
with one of the type UUIDs.  When a client request is received, the
runtime inspects the binding handle and, based on the object UUID in
the binding handle, references the mepv registered for the associated
type to invoke the proper manager routine.  Thus, the
server can provide the same interface to its clients, but provide
different backend processing of their RPC requests based on the needs
or constraints imposed by the particular type of object being managed.

The server exports an entry into the namespace for each object the
server manages.  The entry name is based on the object name and
contains a single object UUID and partial binding information (no
endpoint) for the server.  To enable dynamic binding, the server
registers each object in the endpoint map (rpcd) on its local host.
Each endpoint map element for the server contains the object uuid
representing the object, the interface specification for the customer
interface and complete binding information (including the endpoint the
server is listening on).  Object names must be unique within a cell.
The namespace directory path that the server exports object entries to
is specified in type_mgr.h.
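In outline, the per-object advertising step looks like this
(pseudocode using the standard DCE RPC API names; argument lists are
abbreviated and error handling is omitted):

```
for each object the server manages:
    obj_uuid_vec.uuid[0] = the object's UUID

    /* advertise the object's entry in the namespace */
    rpc_ns_binding_export(rpc_c_ns_syntax_default, object_entry_name,
                          cust_if_handle, binding_vec, obj_uuid_vec, &st)

    /* register the full bindings (with endpoints) in the local rpcd */
    rpc_ep_register(cust_if_handle, binding_vec, obj_uuid_vec,
                    annotation, &st)
```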

The server provides another interface for administration, used to
add, delete, and list the objects that the server offers to customers. It
also has a command to stop the server.  The server exports an entry
into the namespace based on its name and registers an endpoint for the
administration interface containing the object uuid representing the
server instance.  Server names must be unique within a cell.  The
namespace directory path that the server exports its server entry to is
specified in type_mgr.h.  The server also adds itself as a member of a
server RPC-group.  This group is used by the administration program to
list the servers in the cell.  The group name is specified in
type_mgr.h.

When the server is shut down by the administration program, it
automatically removes its endpoints from the endpoint map.  It does
not remove either the server entry or the object entries from the
namespace, as the name service is intended for persistent information.

Since the objects (and their uuids and namespace entries) persist
between server invocations, the server must maintain some record of
the objects it manages and their types that can be found at startup.
We chose to maintain a file based persistent database on the server's
host machine.  The database records are tuples of the format
<object_name><type_name>.  The filesystem directory path that the
server writes its persistent database in is specified in type_mgr.h.

Client
------
The client makes a fixed series of RPCs to the server process each
time it is invoked.  When the user specifies to the client the
namespace entry it wishes to import in order to get a binding handle,
it has effectively chosen the manager entry point vector that will be
used (see "Server" above for details).  The client prints the string
returned through the RPC, which indicates the particular manager
operation invoked by the call.

TM_admin
--------
The tm_admin (administration) application uses RPCs to control the
server process and to manage the namespace.  It accesses the server by
name (implicitly, by namespace entry).  The RPCs include adding,
deleting, and listing the objects managed by a (bound-to) server, and
listing the servers in the cell.



                                INSTRUCTIONS


Refer to the main Examples Readme file in opt\dcelocal\examples
for information on how to build these examples.


This application uses the namespace directory /.:/subsys/examples/type_mgr.
Create it using cdscp (this may need to be done by the cell administrator):

    cdscp create directory /.:/subsys/examples
    cdscp create directory /.:/subsys/examples/type_mgr
    cdscp create directory /.:/subsys/examples/type_mgr/servers

Next, the server process must be brought up.  This is done by specifying
the following command:

    server -n <server_name>

    -n :  This is the base of the namespace entry that will be created by
          the server process to explicitly identify itself (i.e. to the
          administration application).  This is also the name that will
          prefix all server output.  The full namespace entry for the server
          will consist of the name given here, concatenated
          with "@<host>", where host represents the local machine host
          name.  To see which servers exist in the namespace, type the
          command "tm_admin -v".

    -z :  Turn on tracing of the server process.  Output goes to stdout.

Next, we need to run an administration program.  When first started the
server manages no objects.  To add an object:

    tm_admin  -a -o <object_name> -t <object_type> -e <server_name>

You may choose the <object_name> so long as it does not conflict with
another entry in the /.:/subsys/examples/type_mgr directory.  The
<object_type> must be one of the two predefined type names, TYPE1 or
TYPE2.

Other tm_admin options:

    -a :  request to add an object.  Use this flag with the -o and
          -t options.

    -d :  request to delete an object.  Use this flag with the
          -o option.

    -e :  This is the server name that explicitly identifies the
          server process.  A list of available servers can be obtained
          by typing "tm_admin -v".

    -l :  request to list all of the objects that the server holds.

    -n :  This is the name that will prefix all manager output.  The default
          is "TM_ADMIN".  This is optional, and is unimportant.

    -o :  specifies the object name.

    -s :  request to stop the server.

    -t :  specifies the type of the object.

    -z :  Turn on tracing of the tm_admin process.  Output goes to stdout.

You can invoke the tm_admin program as often as you like to manipulate the
objects made available by a server.

Next, we need to run a client against the server.  This is done with the
following command line:

    client -o <object_name>

    -o :  This is the object name that the client wants to access.
          The user verifies that the proper manager was selected by
          examining the output that the client application prints
          after each RPC has completed.

    -n :  This is the name that will prefix all client output.  The default
          is "CLIENT".  This is optional, and is unimportant.

    -z :  Turn on tracing of the client process.  Output goes to stdout.

You can invoke the client program as often as you like.

When you are ready to bring the server process down, you must invoke the
administration program.  This is done as follows:

    tm_admin -s -e <server_name>


Cleaning Up
-----------
After you are done using these example programs, you will want to
remove the object and server entries that have been exported to the
namespace.  This is done as follows:

    1)  Using the following command, you can see the entries that the
        server has created in the namespace:

        /usr/bin/cdsli -Ro /.:/subsys/examples/type_mgr

    2)  The following command will remove the /.:/subsys/examples/type_mgr
        directory created for these tests, as well as all of its
        subordinate objects.

        /usr/bin/cdsdel -R /.:/subsys/examples/type_mgr

        You will be asked for confirmation.  Type "y".

    3)  Issuing the cdsli command given above, you can see that the
        namespace has been cleaned up of all entries this example
        has created.  The error will be:

        Requested entry does not exist (dce / cds)

    4)  Lastly, it is important that you remove the server's local
        database.  Do this by typing the following while in the
        subdirectory where server.exe was run:

        del <server_name>.db

        Where <server_name> is the name that server.exe was started with.


                           NOTES AND WHAT'S MISSING

Security
--------
How the ACLs are set on the /.:/subsys/examples and subordinate
directories will determine if/what dce_login context is needed.

Authentication and authorization would be interesting functionality to add
to the example.


Type Names
----------
To the RPC runtime, the types are represented by UUIDs.  We
associated a name with each type so we could reference the internal
type UUID by name.


Error handling
--------------
The acf comm_status and error_status attributes override any exception
handling mechanism placed around an RPC call.  Either attribute can be
applied to the return value of an RPC operation or to a parameter.
This example applies them to the return values of the RPC calls.
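For instance, a hypothetical ACF fragment applying the attribute as a
return value might read (illustrative only; see the example's actual
.acf files for the real declarations):

```
/* cust.acf (hypothetical fragment): with [comm_status] on the
   operation, communication failures are delivered through the
   operation's error_status_t return value instead of raising an
   exception. */
interface cust
{
    [comm_status] cust_op1();
}
```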

You should be careful how you design the error handling.  There are three
cases of errors in this example.

1. The error is detected by the application program and is related to
   the application itself.  We started working on an array of messages
   which would be indexed by return status codes (a very simple in-core
   message catalog).  Disjoint pieces of this can be found in type_mgr.h
   and tm_admin_calls.c.  What we'll probably do next is put all the
   status codes (as #defines) in type_mgr.h and the message array in
   type_mgr.c - to be linked by all components of the application.

2. The error is detected by a server manager operation and is related
   to the DCE system. If possible, the server should try to return
   the error code back to the client so the client can attempt to deal
   with the situation. If the error code is left intact, the client can
   call dce_error_inq_text() to give the user more information.

3. The error is detected during an application RPC, like
   rpc_s_comm_failure. These error messages should be output using
   dce_error_inq_text().  See tm_admin_calls.c for an example.

In this example program, all RPCs receive [comm_status] errors as
results of the call.  Since the acf files do not specify the
fault_status attribute, faults are handled by the exception handling
mechanism around the RPC calls.


Header Filenames
----------------
In this program, the include file that the idl compiler creates has
the name <xxx>_if.h.  This is not the default header file name that is
normally generated by idl, based on the interface name.  The "_if"
has been added to more easily distinguish the output of the idl
compiler from the other include files.


Object Database
---------------

A couple of things to try here.  The current version of the
object_db_entry_delete call simply marks the in-core object table
entry as "deleted" and then removes the entry from the persistent
database.  A "helper" thread could be added that wakes up every few
minutes and sweeps the in-core table, removing the entries marked as
deleted.  The call to remove the object from the persistent database
could be executed by this helper thread, instead of the admin call
waiting for the file manipulation to complete.  The thread would
need to be awakened during the server shutdown process.  Next, add a
log so if the server crashes, the database can be brought in sync with
the namespace.

After we had finished this component, it dawned on us that we might
have used a namespace RPC-group for each type.  The group would
contain the namespace entries for all of the objects of that type
offered by the server.  If group names were based on the type name, and
the groups for a particular server were located in a well-known name
service directory (the last component of which would be based on the
server name), then the server could find all necessary information at
startup.  We left the original database implementation intact because
it offers some potentially useful solutions to concurrent programming
problems (and because it took a lot of time to write).


Conditionally compiling in Debugging
------------------------------------

If you define the macro _TM_DEBUG to the C compiler, some utility
routines will be compiled in that print out binding handle information in
string format.  This is useful to see the order in which a vector of
binding information is being processed.

Good Luck!

