
Re: System-wide mutexes and pthreads


WARNING: long message  . . .

"Robert Collins" <robert.collins@syncretize.net> wrote:
> 3) The cygserver needs to be robust. It should never need to wait on a
> user owned mutex or event, and if it absolutely has to wait on such a
> thing should always use a timed wait call.

I just read this again and took it in properly this time. There are some
situations in the current cygserver / shm code where the cygserver and the
clients would need a shared (i.e. system-wide) mutex; one example (the only
example?) of this is updating the "control" shared memory segment that
stores the shmid_ds structure for each shm segment. The clients currently
read this directly, and some of the updates to it (performed by cygserver)
span multiple bytes (e.g. updating both the shm_atime and shm_lpid fields at
shmat(2)).
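
To make the race concrete, here's a rough sketch (hypothetical layout, not
the actual code) of the kind of multi-field update that makes a shared mutex
necessary as long as clients read the control segment directly:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <time.h>
#include <unistd.h>

struct control_entry            /* one slot in the shared "control" segment */
{
  int shmid;                    /* key for lookup by clients */
  struct shmid_ds ds;           /* what IPC_STAT returns */
};

/* Server side, at shmat(2): a multi-byte, non-atomic update.  */
void
server_record_attach (control_entry *entry, pid_t client_pid)
{
  entry->ds.shm_atime = time (NULL);   /* field 1 */
  entry->ds.shm_lpid  = client_pid;    /* field 2 */
  entry->ds.shm_nattch += 1;           /* field 3 */
  /* A client reading the entry between any two of these stores sees a
     half-updated shmid_ds -- hence the need for a mutex shared between
     server and clients, which is exactly what we want to avoid.  */
}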

Given your (correct) comments about not having mutexes common to both server
and clients, one solution to this issue is simply to get rid of this control
segment. The clients would then need to request the shmid_ds structure from
the server whenever they wanted it, but that only happens for shmget(2) and
shmctl(2) with IPC_STAT and IPC_SET, and in all but the IPC_STAT case the
client has to make a cygserver request under the current design anyway. I
would *hope* that IPC_STAT isn't a frequently performed operation in most
clients (I can't remember using it for anything other than the usual
paranoid checks during initialisation in production code).
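
Roughly what I have in mind for the client side (the request/reply structs
and cygserver_transact() are just made-up stand-ins for whatever the real
client_request classes end up being):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>

struct shmctl_request { int shmid; int cmd; struct shmid_ds buf; };
struct shmctl_reply   { int error; struct shmid_ds buf; };

extern int cygserver_transact (const shmctl_request &, shmctl_reply &);

int
client_shmctl (int shmid, int cmd, struct shmid_ds *buf)
{
  shmctl_request req = { shmid, cmd, { } };
  if (cmd == IPC_SET && buf)
    req.buf = *buf;                 /* server applies the settable fields */

  shmctl_reply rep;
  if (cygserver_transact (req, rep) == -1 || rep.error)
    {
      errno = rep.error ? rep.error : EIO;
      return -1;
    }
  if (cmd == IPC_STAT && buf)
    *buf = rep.buf;                 /* the server's authoritative copy */
  return 0;
}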

This would have the advantage that for shmget(2), cygserver would simply
create the segment (if required) and return just the shmid to the client,
i.e. the client wouldn't at that point have a handle to the file mapping.
shmat(2) would then need to contact cygserver to get the file map handle
duplicated into the client process (which also lets cygserver keep track of
who is attached to a given segment). In the current design the handle is
duplicated at shmget(2), and shmat(2) only contacts the server to increment
the segment's shm_nattch count etc.
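
The server side of that shmat(2) request would be little more than a
DuplicateHandle call plus bookkeeping -- something like this sketch (the
record_attach helper is made up and error handling is minimal):

#include <windows.h>
#include <sys/types.h>

extern void record_attach (int shmid, pid_t client_pid);

/* Returns a handle valid in the *client's* handle table, or NULL.  */
HANDLE
server_shmat (HANDLE segment_handle, DWORD client_winpid,
              int shmid, pid_t client_pid)
{
  HANDLE client_process = OpenProcess (PROCESS_DUP_HANDLE, FALSE,
                                       client_winpid);
  if (!client_process)
    return NULL;

  HANDLE client_handle = NULL;
  BOOL ok = DuplicateHandle (GetCurrentProcess (), segment_handle,
                             client_process, &client_handle,
                             0, FALSE, DUPLICATE_SAME_ACCESS);
  CloseHandle (client_process);
  if (!ok)
    return NULL;

  record_attach (shmid, client_pid);  /* shm_nattch, shm_atime, shm_lpid */
  return client_handle;               /* goes back in the reply */
}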

The data structures used by the client-side library (i.e. the dll) could be
simplified, as it would only require a list of attached segments rather than
the current two-level list of shmnode objects and shmattach objects. So on
shmat(2), the client just gets the file map handle duplicated by the server,
maps the view into its address space, and adds an entry to the attach list,
keyed by the segment's base address. Even if a client attached the same
segment more than once, which I would have thought was uncommon, it could
use exactly the same sequence.
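
So the whole client-side shmat(2) would boil down to something like this
(the attach-list type and the cygserver round-trip helper are hypothetical,
and locking is left out):

#include <windows.h>
#include <sys/shm.h>
#include <map>

/* maps view base address -> the duplicated file-mapping handle */
std::map<void *, HANDLE> attach_list;

extern HANDLE request_attach_handle (int shmid);  /* cygserver round trip */

void *
client_shmat (int shmid, int shmflg)
{
  HANDLE hmap = request_attach_handle (shmid);    /* handle dup'd by server */
  if (!hmap)
    return (void *) -1;

  DWORD access = (shmflg & SHM_RDONLY) ? FILE_MAP_READ : FILE_MAP_WRITE;
  void *base = MapViewOfFile (hmap, access, 0, 0, 0);
  if (!base)
    {
      CloseHandle (hmap);
      return (void *) -1;
    }

  attach_list[base] = hmap;   /* same sequence even for a repeated attach */
  return base;
}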

nb. I'm not sure what happens if the client forks halfway through the attach
code being run: the attach list would be inconsistent, so the fork code would
need to lock the attach-list mutex before it could go ahead. There's going to
have to be some mutex handling in the fork implementation anyhow if any of
the dll code uses mutexes. Also note that this will be an issue with the
current design as well (once it gets mutex'ed, that is).
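
The pattern I mean is the usual one (shown here with pthread_atfork purely
for illustration -- in the dll it would hang off the fork implementation
itself): take the attach-list mutex before the fork and release it on both
sides afterwards, so the child never sees a half-updated list.

#include <pthread.h>

static pthread_mutex_t attach_list_mutex = PTHREAD_MUTEX_INITIALIZER;

static void prepare (void) { pthread_mutex_lock (&attach_list_mutex); }
static void release (void) { pthread_mutex_unlock (&attach_list_mutex); }

void
init_attach_list_fork_handling (void)
{
  /* prepare runs in the parent before fork; the other two run after the
     fork, in the parent and the child respectively.  */
  pthread_atfork (prepare, release, release);
}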

The only place that such an attach list would then be needed is the
fixup-on-fork code, which has to contact the server to tell it to update the
server's notion of which process is attached to which segments (i.e. a
different interface from the one used by shmat(2)).
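
With the client keeping its attach list, the fixup-on-fork code in the child
would look something like this sketch (request_fixup_handle is a made-up
name for that separate interface):

#include <windows.h>
#include <map>

extern std::map<void *, HANDLE> attach_list;        /* inherited via fork */
extern HANDLE request_fixup_handle (void *parent_base);

void
fixup_shm_after_fork (void)
{
  for (std::map<void *, HANDLE>::iterator it = attach_list.begin ();
       it != attach_list.end (); ++it)
    {
      /* The parent's handle value may not be usable here, so ask the server
         for one valid in this process, then map at the same address the
         parent used so pointers into the segment stay valid.  */
      HANDLE hmap = request_fixup_handle (it->first);
      MapViewOfFileEx (hmap, FILE_MAP_WRITE, 0, 0, 0, it->first);
      it->second = hmap;
    }
}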

Conceivably the client could hold no state at all and request everything
from the server, since the server needs to keep an attach list for each
client in any case, to clean up on client process exit. The difficulty then
would be the fixup-on-fork code. This would need to send a single message to
the server, which would duplicate the parent's attach list and send it back
to the child (it could also immediately set the child's server-side attach
list to exactly that list, since it knows what the child is about to do with
it). The child then runs down the list, calling MapViewOfFile for each
attached segment. The only issue here is that the parent must neither have
exited nor attached to or detached from any segments between the fork(2) and
the fixup-on-fork code being run in the child; but I think the parent is
suspended during the copy. Yes?
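
And the stateless version of the same fixup, as I picture it (the wire
format and the single-round-trip helper are hypothetical; I've used
MapViewOfFileEx here so each view lands at the parent's base address):

#include <windows.h>
#include <vector>

struct attach_record { void *base; HANDLE hmap; };

/* One round trip: the server copies the parent's list, installs it as the
   child's server-side list, and returns it (with handles already valid in
   the child).  */
extern std::vector<attach_record> request_parent_attach_list (void);

void
stateless_fixup_after_fork (void)
{
  std::vector<attach_record> list = request_parent_attach_list ();
  for (size_t i = 0; i < list.size (); i++)
    /* Must land at the same base address as in the parent; this relies on
       the parent neither exiting nor attaching/detaching between fork(2)
       and this code running in the child.  */
    MapViewOfFileEx (list[i].hmap, FILE_MAP_WRITE, 0, 0, 0, list[i].base);
}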

So: two proposals. The first is to remove the "control" shared memory
segment and have the clients request all information from the server. The
second is to strip all the state out of the client-side code, which would
make it thread-safe too :-)

The first seems like a good idea to me; the second is fuzzier. Has anyone
any opinions on this? Well, anyone who's read this far, anyway :-)

Cheers,

// Conrad



