ILU can be used in either the single-threaded or the multi-threaded programming style. This chapter describes how.
The issue of threadedness appears at two levels: within a program instance, and again for an entire distributed system. We will first discuss the program level, and then the system level.
ILU factors its runtime support into a common kernel and several independent language-specific veneers; you will see this structure when you try to do certain non-vanilla things. The interface to the runtime kernel is `ILUHOME/include/iluxport.h'.
Some programming languages are defined to support multiple threads of control; Modula-3 is an example. Other language definitions are single-threaded, or are silent on this issue. Some of these, such as C and C++, can be used to write multi-threaded programs with the help of certain libraries, coding practices, and compilation switches. ILU can be used in multi-threaded programs both in inherently multi-threaded languages and in some of those where multi-threading is an option.
ILU's runtimes for both Franz Common Lisp and Modula-3 support multi-threading; programmers do not need to do anything special in these languages.
ILU's runtimes for C and C++ support both single-threaded and multi-threaded programming; they assume single-threading by default, and can be switched to multi-threading by a procedure call during initialization (described below).
ILU's runtime for Python provides only single-threaded operation.
ILU's runtime kernel defaults to supporting single-threaded operation, and can be switched to multi-threading by procedure calls during initialization. It is the responsibility of the language runtime to make these calls, if the language is inherently multi-threaded, or to offer the option of making them, if the language is optionally multi-threaded. A later subsection describes how to switch the kernel.
By default, the ANSI C language support in ILU is non-threaded. However, support for both Solaris-2 and POSIX threads is included in the 2.0 release.
To switch the ILU ANSI C runtime from its default assumption of single-threadedness to multi-threaded operation, place the macro ILU_C_ENABLE_THREADS before any calls to ILU_C_Run, ILU_C_InitializeServer, or anything that relies on a default ilu_Server existing. This will switch both the ILU kernel and the C runtime to multi-threaded operation. Note that the ability to use either POSIX or Solaris-2 threads must have been enabled during system configuration.
In some thread systems, it is important for the "main" thread not to exit before the program is finished executing. To provide for this, your C program should call ILU_C_FINISH_MAIN_THREAD(val) instead of simply returning from main(). This routine will block if necessary until it is safe for the thread to return, and will then return the value val.
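For concreteness, here is a rough skeleton of a multi-threaded ILU C program. This is a sketch, not a verbatim template: the interface header named below is hypothetical (it stands for whatever the ILU C stubber generates for your interface), the exact invocation forms of the two macros are assumptions, and all server and object setup is elided.

    /* Sketch of a multi-threaded ILU C main program.  "myiface.h" is a
       placeholder for the header the ILU C stubber generates for your
       interface; consult it and ILUHOME/include/ for the real
       declarations of the macros used here. */
    #include "myiface.h"

    int main(int argc, char *argv[])
    {
      /* Must come before ILU_C_Run, ILU_C_InitializeServer, or anything
         that relies on a default ilu_Server existing. */
      ILU_C_ENABLE_THREADS();

      /* ... create servers and true objects, export them ... */

      /* Instead of simply returning from main(), block (if necessary)
         until it is safe for the main thread to exit. */
      ILU_C_FINISH_MAIN_THREAD(0);
      return 0;   /* not reached if the macro itself returns from main */
    }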
To switch the ILU C++ runtime from its default assumption of single-threadedness to multi-threaded operation, call iluServer::SetFork (described in `ILUHOME/include/ilu.hh') before calling iluServer::Run, iluServer::Stoppable_Run, iluServer::iluServer, or anything that relies on a default iluServer existing. iluServer::SetFork makes a feeble attempt to detect being called too late, returning a logical value indicating whether an error was detected (when an error is detected, the switch is not made). This detection is not reliable -- the caller should take responsibility for getting this right.
Pass to iluServer::SetFork a procedure for forking a new thread. This forking procedure is given two arguments: a procedure of one pointer (void *) argument, and a pointer value; the forked thread should invoke that procedure on that value, terminating when the procedure returns.
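For concreteness, a forking procedure of the shape just described might be written with POSIX threads along the following lines. The code is plain C (usable from C++ as well), the typedef is illustrative rather than the type actually declared in `ILUHOME/include/ilu.hh', and error handling is omitted.

    /* Illustrative forking procedure: given a procedure of one (void *)
       argument and a pointer value, start a thread that applies the one
       to the other; the thread terminates when that procedure returns.
       The typedef is illustrative only -- the exact type expected by
       iluServer::SetFork is declared in ilu.hh. */
    #include <pthread.h>
    #include <stdlib.h>

    typedef void (*example_work_proc)(void *arg);

    struct fork_closure {           /* carries proc+arg across pthread_create */
      example_work_proc proc;
      void *arg;
    };

    static void *fork_trampoline(void *raw)
    {
      struct fork_closure *c = (struct fork_closure *) raw;
      example_work_proc proc = c->proc;
      void *arg = c->arg;
      free(c);
      proc(arg);                    /* thread ends when proc returns */
      return NULL;
    }

    /* The procedure to hand to iluServer::SetFork (modulo the exact
       declared parameter types). */
    void example_fork(example_work_proc proc, void *arg)
    {
      pthread_t tid;
      struct fork_closure *c =
        (struct fork_closure *) malloc(sizeof *c);
      c->proc = proc;
      c->arg = arg;
      pthread_create(&tid, NULL, fork_trampoline, c);
      pthread_detach(tid);          /* the new thread is never joined */
    }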
Before calling iluServer::SetFork, you must switch the kernel to multi-threaded operation by calling ilu_SetWaitTech, ilu_SetMainLoop, and ilu_SetLockTech, as mentioned later (see section Switching the Runtime Kernel to Multi-Threaded Operation). ILU's C++ runtime takes care of forking the thread to call ilu_OtherNewConnection; you should not call ilu_NewConnectionGetterForked.
The kernel assumes single-threaded operation, and can be switched to multi-threading. To do so, four procedure calls must be made early in the initialization sequence, on ilu_SetWaitTech, ilu_SetMainLoop, ilu_SetLockTech, and ilu_NewConnectionGetterForked. See `iluxport.h' for details, and the Modula-3 (NOT) and Common Lisp language-specific veneers (found in `ILUSRC/runtime/m3/' and `ILUSRC/runtime/lisp/') for usage examples.
Users of ILU in single-threaded programs typically need to worry about only one thing: the main loop. To animate ILU server modules, a single-threaded program needs to be running the ILU main loop. This can be done, e.g., by calling ILU_C_Run() in C or iluServer::Run in C++. ILU also runs its main loop while waiting for I/O involved in RPC (so that incoming calls may be serviced while waiting for a reply to an outgoing call; for more on this, see the section on "Threadedness in Distributed Systems").
The problem is, many other subsystems also have or need their own main loop; windowing toolkits are a prime example. When a programmer wants to create a single-threaded program that uses both ILU and another main-looped subsystem, one main loop must be made to serve both (or all) subsystems. From ILU's point of view, there are two approaches to doing this: (1) use ILU's default main loop, or (2) use some external (to ILU) main loop (this might be the main loop of some other subsystem, or a main loop synthesized specifically for the program at hand). ILU supports both approaches -- more precisely, it is ILU's runtime kernel that supports them; currently no language veneer mentions this facility. This is, in part, because it has no interaction with the jobs of the language veneers -- application code can call this part of the kernel directly (from any language that supports calling C code).
ILU needs a main loop that repeatedly waits for I/O being enabled on file descriptors (a UNIX term) and/or certain times arriving, and invokes given procedures when the awaited events happen. (Receipt of certain UNIX signals should probably be added to the kinds of things that can be awaited.) The main loop can be recursively invoked by these given procedures, and thus particular instances of the main loop can be caused to terminate as soon as the currently executing given procedure returns. This functionality can be accessed via the procedures ilu_RunMainLoop through ilu_UnsetAlarm in `iluxport.h'; these procedures are shims that call the actual procedures of whatever main loop is really being used.
In this approach, ILU's default main loop is made to serve the needs of both ILU and the other main-loop-using parts of the program. When the other main-loop-using parts of the program need to wait for I/O being enabled or a particular time arriving, you arrange to call the appropriate registration procedures of the ILU main loop (e.g., ilu_RegisterInputSource, ilu_RegisterOutputSource, ilu_SetAlarm).
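As a small sketch of approach (1), suppose some other subsystem hands you a file descriptor that it wants watched for readability; you would register a handler for it with ILU's default main loop roughly as follows. The handler and registration shapes shown here are assumptions based on the description above -- the authoritative prototypes are in `iluxport.h'.

    /* Sketch only: letting ILU's default main loop watch a descriptor on
       behalf of another subsystem.  The assumed shape -- a descriptor, a
       handler taking the descriptor and an uninterpreted "rock", and the
       rock itself -- is a guess; check iluxport.h for the real prototype
       of ilu_RegisterInputSource before using this. */
    #include <iluxport.h>   /* i.e., ILUHOME/include/iluxport.h */

    /* Called from ILU's main loop when other_fd is (believed) readable. */
    static void handle_other_input(int fd, void *rock)
    {
      (void) rock;
      /* ... read from fd and hand the data to the other subsystem ... */
    }

    void watch_other_subsystem(int other_fd)
    {
      ilu_RegisterInputSource(other_fd, handle_other_input, (void *) 0);
    }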
In this approach, you use an external (to ILU) main loop to serve the needs of ILU (as well as other parts of your program). This involves getting ILU to reveal to you its needs for waiting on I/O and time passage, and arranging to satisfy those needs using the services of the external main loop. You do this by calling ilu_SetMainLoop early in the initialization sequence, passing an ilu_MainLoop metaobject of your creation. ILU reveals its needs to you by calls on the methods of this metaobject, and you satisfy them in your implementations of those methods.
Note that an ilu_MainLoop is responsible for managing multiple alarms. Some external main loops may directly support only one alarm. Later in `iluxport.h' you will find a general alarm multiplexing facility, which may come in handy in such situations.
See the files in `ILUSRC/runtime/mainloop/' for several examples of this approach (for the X Window System's various toolkits, like Motif, Xaw, XView, and Tk).
Both of the above approaches rely on there being a certain amount of harmony between the functional requirements made by some main-looped subsystems and the functional capabilities offered by others. They also rely on the subsystems whose "normal" main loops are not used being open enough that you can determine their main loop needs. These conditions cannot be guaranteed in general. We've tried to minimize the main loop requirements of ILU, and maximize its openness.
We know of an example where neither of the above approaches is workable, and have a solution that may be of interest. See `ILUSRC/etc/xview/' for the (untested) code.
The problem is with the XView toolkit (for the X Window System). Its main loop cannot be recursively invoked (a requirement of ILU), and the XView toolkit is not open enough to enable use of any other main loop.
Our solution is to use XView's main loop as the top-level main loop, letting ILU use its own main loop when waiting on RPC I/O. Like the external main loop approach, this requires getting ILU to reveal its needs for waiting on I/O and time; unlike the external main loop approach, it requires not calling ilu_SetMainLoop. Instead of calling ilu_SetMainLoop, you call ilu_AddRegisterersToDefault, which causes ILU's default main loop to reveal ILU's needs to you -- in addition to doing everything the default main loop normally does. (Actually, the multiple alarms of ILU have been multiplexed into one here for your convenience.) You register these needs with the XView main loop, and run it at the top level.
This solution is not as good as we'd like; it does not provide a truly integrated main loop. In particular, any I/O handler registered through ILU's generic procedures (ilu_RegisterInputSource, ilu_RegisterOutputSource) may be called spuriously: due to lack of coordination, both loops may decide a call is in order (when, of course, only one call is in order). As of release 2.0, ILU's own I/O handlers are prepared for spurious calls. Application programmers who use ilu_AddRegisterersToDefault are responsible for making sure that their own I/O handlers registered through ILU's generic procedures are likewise prepared for spurious calls.
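One way to cope is to treat each invocation of your handler as a hint rather than a guarantee, and have the handler check for itself (for example, with a zero-timeout poll()) that input is actually available before doing any work. A sketch, again using the assumed handler shape from the earlier example:

    /* Sketch of an I/O handler that tolerates spurious invocation: each
       call is treated as a hint, and a zero-timeout poll() verifies that
       the descriptor is really readable before any work is done.  The
       handler signature is the same assumption as in the earlier sketch. */
    #include <poll.h>

    static void tolerant_input_handler(int fd, void *rock)
    {
      struct pollfd pfd;
      (void) rock;
      pfd.fd = fd;
      pfd.events = POLLIN;
      pfd.revents = 0;

      if (poll(&pfd, 1, 0) <= 0 || !(pfd.revents & POLLIN))
        return;                     /* spurious call: nothing to read */

      /* ... there really is input; read from fd and process it ... */
    }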
In a distributed system of interacting program instances, you can (in principle, if not easily in practice) trace a thread of control across remote procedure calls. Thus a distributed system, viewed as a whole, can be seen to be programmed in either a single-threaded or a multi-threaded style. ILU aims to minimize the consequences of the choice between in-memory and RPC binding, and this requires things not usually offered by other RPC systems. Some of these things are required by both the single-threaded and the multi-threaded styles of programming distributed systems, for related but not quite identical reasons.
Forget RPC for a moment, and consider a single-threaded program instance. Method m1 of object o1 (we'll write this as o1.m1) may call o2.m2, which may call o3.m3, which may in turn call o1.m1 again, which could then call o3.m4, and then everything could return (in LIFO order, of course). Late in this scenario, the call stack of the one thread includes two activations of the very same method of the same object (o1.m1), and another two activations of different methods of a common object (o3). All this is irrespective of module boundaries.
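To make the scenario concrete, here is the same call chain written as an ordinary single-program sketch, with no ILU involved: objects become plain structs and methods become functions taking the object as an explicit first argument (all names are made up for the example).

    /* Purely illustrative, non-ILU sketch of the nested call chain
       described above: o1.m1 -> o2.m2 -> o3.m3 -> o1.m1 -> o3.m4, all on
       one thread's stack. */
    #include <stdio.h>

    typedef struct { const char *name; } Obj;

    static Obj o1 = {"o1"}, o2 = {"o2"}, o3 = {"o3"};

    static void m1(Obj *self, int depth);   /* o1.m1; forward declaration */

    static void m4(Obj *self)               /* o3.m4 */
    {
      printf("%s.m4\n", self->name);
    }

    static void m3(Obj *self)               /* o3.m3 */
    {
      printf("%s.m3\n", self->name);
      m1(&o1, 2);              /* re-enters o1.m1: a second activation */
    }

    static void m2(Obj *self)               /* o2.m2 */
    {
      printf("%s.m2\n", self->name);
      m3(&o3);
    }

    static void m1(Obj *self, int depth)    /* o1.m1 */
    {
      printf("%s.m1 (activation %d)\n", self->name, depth);
      if (depth == 1)
        m2(&o2);               /* outer activation continues the chain */
      else
        m4(&o3);               /* inner activation calls o3.m4; then
                                  everything returns in LIFO order */
    }

    int main(void)
    {
      m1(&o1, 1);    /* while m4 runs, the stack holds two activations of
                        o1.m1 and activations of two methods of o3 */
      return 0;
    }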
We want to be able to do the same thing in a distributed setting, where, e.g., each true object is in a different program instance. This means that while the ILU runtime is waiting for the reply of an RPC, it must be willing to service incoming calls. This is why ILU requires a recursive main loop in single-threaded programs.
In fact, one rarely wants single-threaded distributed systems. Indeed, the opportunities for concurrency are one of the main attractions of distributed systems. In particular, people often try to build multi-threaded distributed systems out of single-threaded program instances. While we hope this confused approach will fade as multi-threading support becomes more widespread, we recognize that it is currently an important customer requirement. Making single-threaded ILU willing to recursively invoke its main loop also makes single-threaded program instances more useful in a multi-threaded distributed system (but what you really want are multi-threaded program instances).
Threading is also an issue in RPC protocols. Some allow at most one outstanding call per connection. When using one of these, ILU is willing to use multiple parallel RPC connections, because they're needed to make nested calls on the same server.