Yes. In YumaPro SDK PTHREADS builds, POSIX multi-threading is supported, with each session running in its own thread.


This article describes the functional specification for POSIX Threads support in the YumaPro netconfd-pro program.

The netconfd-pro server supports both single-thread and multi-thread processing. A single-thread netconfd-pro server has a single entry point, netconfd_run() in netconfd.c, and a single exit point. A multi-thread program has the same initial entry point, followed by many entry and exit points that run concurrently. The term "concurrency" refers to performing multiple tasks at the same time.


Single CPU Threads Processing 


The netconfd-pro has built-in support for concurrent programming by running multiple threads concurrently within a single program. A thread, also called a lightweight process, is a single sequential flow of programming operations, with a definite beginning and an end. A thread by itself is not a program because it cannot run on its own. Instead, it runs within a program. The following figure shows a program with 3 threads running under a single CPU:



Multi-CPU Threads Processing  


In a single-CPU machine, only one task can be executed at a time. In a multi-CPU machine, several tasks can be executed simultaneously, either distributed among the CPUs or time-sliced on them.

The netconfd-pro server provides multitasking and multi-threading support for better performance by making full use of the available computing resources. The netconfd-pro server uses a co-operative multitasking model: each thread must voluntarily yield control to other threads. A thread accesses a shared resource by applying a lock; after the task is performed, the thread unlocks the resource, which lets other threads access the same resource.
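
To make the lock/unlock pattern concrete, the minimal sketch below (plain pthreads, not YumaPro source; the names worker, shared_counter, and resource_lock are illustrative) shows two threads taking turns on a shared resource:

    /* Minimal sketch: two threads cooperatively sharing one resource by
     * locking a mutex, doing their work, and unlocking so the other
     * thread can proceed. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_counter = 0;     /* the shared resource */

    static void *worker (void *arg)
    {
        const char *name = (const char *)arg;
        for (int i = 0; i < 3; i++) {
            pthread_mutex_lock(&resource_lock);    /* acquire the resource */
            shared_counter++;
            printf("%s updated counter to %d\n", name, shared_counter);
            pthread_mutex_unlock(&resource_lock);  /* yield it to the other thread */
        }
        return NULL;
    }

    int main (void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, "thread-1");
        pthread_create(&t2, NULL, worker, "thread-2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }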




Netconfd-pro Threads Processing


The following diagram illustrates the main and default netconfd-pro threads processing mechanism. By default, the server allocates 4 threads that are responsible for various tasks.


The server allocates the following threads (a simplified sketch of how such a layout might be spawned follows the list):

  • Main thread: this thread is primarily used to check for shutdown events and to handle signals
  • Background thread: constantly checks whether a Confirmed Commit, if one is in progress, has been cancelled
  • Timer thread: single thread whose primary responsibility is to call the timer service routine once a second
  • Connect thread: connection thread allocated to listen for session requests. It listens on the socket in order to accept new clients, creating a Session Thread and a file descriptor for each new session
  • Session threads: receive thread/session handlers. Each checks for I/O, an explicit signal, or a timeout.
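
The sketch below illustrates, under assumed names, how such a thread layout could be spawned with pthread_create; it is a simplified stand-in, not the actual netconfd-pro startup code:

    /* Simplified sketch of the default thread layout described above.
     * All names here are illustrative. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>

    static volatile bool shutdown_requested = false;

    /* Timer thread: call a timer service routine once a second */
    static void *timer_thread (void *arg)
    {
        (void)arg;
        while (!shutdown_requested) {
            sleep(1);
            /* timer_service_tick();  -- hypothetical timer routine */
        }
        return NULL;
    }

    /* Connect thread: accept new clients and spawn one session thread
     * (and file descriptor) per new session */
    static void *connect_thread (void *arg)
    {
        (void)arg;
        while (!shutdown_requested) {
            /* fd = accept(listen_fd, ...);                -- new session FD  */
            /* pthread_create(&tid, NULL, session_thread, &fd); -- per session */
            sleep(1);   /* placeholder so the sketch runs as-is */
        }
        return NULL;
    }

    int main (void)
    {
        pthread_t timer_tid, connect_tid;

        pthread_create(&timer_tid, NULL, timer_thread, NULL);
        pthread_create(&connect_tid, NULL, connect_thread, NULL);

        /* Main thread: watch for a shutdown event or signal;
         * here it simply shuts down after a few seconds */
        for (int i = 0; i < 3; i++) {
            sleep(1);
        }
        shutdown_requested = true;

        pthread_join(timer_tid, NULL);
        pthread_join(connect_tid, NULL);
        return 0;
    }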


The use of multiple threads is in some ways an ideal solution to the problem of asynchronous I/O, as well as asynchronous event handling in general, since it allows events to be dealt with asynchronously and any needed synchronization can be done explicitly (using mutexes).

The netconfd-pro server initializes multiple mutex locks and defines basic critical-section primitives. The following general-purpose mutex lock is used to prevent thread collisions:


ENTER/EXIT_CS: use these macros for extended computational sections (everything else). They are general purpose in that they do not employ a spin lock and may be called recursively (by the owning thread). See heapchk.c for an example; a minimal usage sketch follows the lock template below.


Lock Template

  • Type:  PTHREAD_MUTEX_RECURSIVE:  A thread attempting to relock this mutex without first unlocking it shall succeed in locking the mutex. The relocking deadlock which can occur with mutexes of type PTHREAD_MUTEX_NORMAL cannot occur with this type of mutex. Multiple locks of this mutex shall require the same number of unlocks to release the mutex before another thread can acquire the mutex. A thread attempting to unlock a mutex which another thread has locked shall return with an error. A thread attempting to unlock an unlocked mutex shall return with an error.

  • File:   thd.h

  • Macro:   ENTER/EXIT_CS

  • Template:    thd_recursive_cs_mutex_attr

  • Lock:    ENTER_CS

  • Unlock:   EXIT_CS
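
The following sketch shows how a recursive critical-section mutex of this kind can be initialized and used with plain pthreads. The macro and variable names (ENTER_CS_SKETCH, EXIT_CS_SKETCH, cs_mutex) are illustrative stand-ins rather than the actual definitions in thd.h:

    /* Sketch of a recursive critical-section mutex in the spirit of the
     * ENTER_CS / EXIT_CS macros. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutexattr_t cs_mutex_attr;
    static pthread_mutex_t     cs_mutex;

    #define ENTER_CS_SKETCH()  pthread_mutex_lock(&cs_mutex)
    #define EXIT_CS_SKETCH()   pthread_mutex_unlock(&cs_mutex)

    static void inner_step (void)
    {
        ENTER_CS_SKETCH();      /* relocking by the owning thread succeeds */
        printf("inner step inside the critical section\n");
        EXIT_CS_SKETCH();
    }

    int main (void)
    {
        /* A recursive mutex must be unlocked as many times as it was locked */
        pthread_mutexattr_init(&cs_mutex_attr);
        pthread_mutexattr_settype(&cs_mutex_attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&cs_mutex, &cs_mutex_attr);

        ENTER_CS_SKETCH();      /* first lock */
        inner_step();           /* nested lock/unlock by the same thread */
        EXIT_CS_SKETCH();       /* final unlock releases the mutex */

        pthread_mutex_destroy(&cs_mutex);
        pthread_mutexattr_destroy(&cs_mutex_attr);
        return 0;
    }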

       

NOTES


It should be noted which operations and actions can be run concurrently and which cannot. The netconfd-pro server supports many actions and operations, and most of them can be run asynchronously; however, this primarily depends on whether the current operation affects the datastore or not. For example, assume there are multiple client sessions in progress and all of them are attempting to write to the datastore. In this case, to avoid any datastore corruption, the netconfd-pro server will serialise all the requests and apply them one after another, instead of trying to apply some combination of all the requests, which would inevitably result in datastore corruption.


The following operations can be run asynchronously:

  • GET requests
  • RPC requests
  • Multiple client sessions

Since the above requests do not rely on the current datastore state, all of them can be processed concurrently from multiple sessions. Only internal Critical Section locking will be applied.

However, the requests that affect and modify the datastore cannot be run at the same time:
  • EDIT-CONFIG
  • Any operation that modifies the datastore
  • ACTIONS

If an EDIT-CONFIG is in progress, a GET request will be held up until the edit request is done, and any other operation that relies on the datastore state will likewise be put on hold. A simplified illustration of this read/write behavior follows below.
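
The following minimal sketch illustrates this behavior with a reader-writer lock: read-only requests share access, while a request that modifies the datastore holds exclusive access. The names (get_request, edit_request, datastore_lock) are hypothetical, and this is not the server's actual locking code:

    /* Illustrative sketch only: read-only requests run concurrently, while
     * any request that modifies the datastore holds an exclusive lock. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_rwlock_t datastore_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* GET-style request: shared (read) access, many can run at once */
    static void *get_request (void *arg)
    {
        pthread_rwlock_rdlock(&datastore_lock);
        printf("session %ld: reading datastore\n", (long)arg);
        usleep(1000);
        pthread_rwlock_unlock(&datastore_lock);
        return NULL;
    }

    /* EDIT-CONFIG-style request: exclusive (write) access, serialised */
    static void *edit_request (void *arg)
    {
        pthread_rwlock_wrlock(&datastore_lock);
        printf("session %ld: applying edit, readers are held up\n", (long)arg);
        usleep(1000);
        pthread_rwlock_unlock(&datastore_lock);
        return NULL;
    }

    int main (void)
    {
        pthread_t tids[4];

        pthread_create(&tids[0], NULL, get_request,  (void *)1L);
        pthread_create(&tids[1], NULL, get_request,  (void *)2L);
        pthread_create(&tids[2], NULL, edit_request, (void *)3L);
        pthread_create(&tids[3], NULL, get_request,  (void *)4L);
        for (int i = 0; i < 4; i++) {
            pthread_join(tids[i], NULL);
        }
        return 0;
    }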