The Unix domain session is used primarily to represent a client connection that can be managed on a separate thread.
#include <unix.h>
Inherits ost::Thread and ost::UnixStream.
UnixSession (const char *pathname, int size=512, int pri=0, int stack=0)
    Create a Unix domain socket that will be connected to a local server and that will execute under its own thread.

UnixSession (UnixSocket &server, int size=512, int pri=0, int stack=0)
    Create a Unix domain socket from a bound Unix domain server by accepting a pending connection from that server and executing a thread for the accepted connection.

virtual ~UnixSession ()
    Virtual destructor.

int waitConnection (timeout_t timeout=TIMEOUT_INF)
    Normally called from the thread's initial() method, this waits for the socket connection to complete when connecting to a remote socket.

void initial (void)
    The initial method is used to establish a connection when delayed completion is used.
The Unix domain session is used primarily to represent a client connection that can be managed on a separate thread. It also supports a non-blocking connection scheme, which prevents blocking during the constructor by moving completion of the connection into the thread that executes for the session.
Author:
    Alex Pavloff [email protected]

Threaded streamable Unix domain socket with non-blocking constructor.
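As a sketch of how the class is typically used (inferred from the constructor and run() requirements documented below, not quoted from the library), a client session derives from ost::UnixSession and implements the thread body. The header path <cc++/unix.h> is the usual installed location, though this page's summary shows <unix.h>, and streaming assumes UnixStream exposes the usual CommonC++ iostream interface:

    // Minimal client session sketch for ost::UnixSession.
    #include <cc++/unix.h>
    #include <iostream>

    using namespace ost;

    class EchoSession : public UnixSession
    {
    public:
        // Connect (with delayed completion) to a local Unix domain server.
        EchoSession(const char *path, int size = 512, int pri = 0) :
            UnixSession(path, size, pri) {}

    protected:
        // Thread body; runs after initial() has handled the connection.
        void run()
        {
            // waitConnection() returns 0 on success, -1 on timeout.
            if(waitConnection(5000) != 0)
                return;
            *this << "hello" << std::endl; // UnixStream is also an iostream
        }
    };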
Create a Unix domain socket that will be connected to a local server and that will execute under its own thread.
Parameters:
pathname path to socket
size size of the streaming buffer.
pri execution priority relative to parent.
stack allocation needed on some platforms.
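For illustration only (the socket path, buffer size, and priority are made-up values), constructing this way through the EchoSession sketch above and starting its thread might look like:

    // Hypothetical values: "/tmp/demo.sock" path, 1024-byte buffer, pri 1.
    EchoSession *client = new EchoSession("/tmp/demo.sock", 1024, 1);
    client->start(); // connection completes in the session thread, not here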
Create a Unix domain socket from a bound Unix domain server by accepting a pending connection from that server and executing a thread for the accepted connection.
Parameters:
server Unix domain socket to accept a connection from.
size size of the streaming buffer.
pri execution priority relative to parent.
stack allocation needed on some platforms.
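A sketch of the accepting pattern, assuming UnixSocket is constructed on a bound pathname and that Socket::isPending() can be used to block until a connection request arrives (both assumptions mirror the TCP classes; error handling omitted):

    #include <cc++/unix.h>
    #include <iostream>

    using namespace ost;

    class Worker : public UnixSession
    {
    public:
        // Accept a pending connection and service it on its own thread.
        Worker(UnixSocket &server) : UnixSession(server) {}

    protected:
        void run()
        {
            *this << "welcome" << std::endl;
        }
    };

    int main()
    {
        UnixSocket server("/tmp/demo.sock"); // bound, listening domain socket

        for(;;) {
            // Block until a connection request is pending, then hand it to
            // a session thread via the accepting constructor shown above.
            if(server.isPending(Socket::pendingInput)) {
                Worker *w = new Worker(server);
                w->start();
                // Note: worker threads are never reclaimed in this sketch.
            }
        }
        return 0;
    }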
Virtual destructor.
The initial method is used to establish a connection when delayed completion is used. This assures the constructor terminates without having to wait for a connection request to complete.
Reimplemented from ost::Thread.
Called from the thread's initial() method by default, this waits for the socket connection to complete when connecting to a remote socket. One might wish to use setCompletion() to change the socket back to blocking I/O calls after the connection completes. To implement the session, one must create a derived class which implements run().
Returns:
0 if successful, -1 if timed out.
Parameters:
timeout time to wait for completion, in milliseconds.
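Tying the pieces together, a sketch of a run() body that uses waitConnection() and then setCompletion() as suggested above. QuerySession is a hypothetical class name, the 4-second timeout is illustrative, and passing true is assumed to restore blocking (completion) mode:

    #include <cc++/unix.h>
    #include <iostream>

    class QuerySession : public ost::UnixSession
    {
    public:
        QuerySession(const char *path) : ost::UnixSession(path) {}

    protected:
        void run()
        {
            if(waitConnection(4000) != 0)   // 0 = connected, -1 = timed out
                return;
            setCompletion(true);            // assumption: true = blocking I/O
            *this << "ready" << std::endl;  // stream over the connection
        }
    };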