SPRAAK
Data Structures | Typedefs | Enumerations | Functions
API_HighLvl::lib::nn

Data Structures

struct  SprNN
 

Typedefs

typedef struct SprNNStreamO_t SprNNStreamO
 
typedef struct SprNNStreamC_t SprNNStreamC
 close an open stream More...
 
typedef struct SprNNIOEl_t SprNNIOEl
 
typedef struct SprNNI_t SprNNI
 NN interface. More...
 
typedef void *(* SprNNDataIn )(void *restrict layer_val, void *restrict src, int Nel)
 function pointer to handle the input data More...
 
typedef void *(* SprNNDataOut )(void *restrict dst, const void *restrict layer_val, int Nel)
 function pointer to handle the output data More...
 

Enumerations

enum  {
  SPR_NN_EVAL, SPR_NN_TRAIN, SPR_NN_NOVECTOR, SPR_NN_NOSCALAR,
  SPR_NN_PARALLEL, SPR_NN_ASYNC, SPR_NN_TRAIN2, SPR_NN_NOWARN
}
 

Functions

int spr_nn_dump (SprStream *fd, const SprNN *restrict nn)
 
SprNN* spr_nn_free (SprNN *nn)
 
SprNN* spr_nn_init (SprStream *fd, const char *fname, int flags, SprVarlist *vars)
 
void * spr_nn_data_in_memcpy (void *restrict layer_val, void *restrict src, int Nel)
 a standard implementation to handle the input data More...
 
void * spr_nn_data_out_memcpy (void *restrict dst, const void *restrict layer_val, int Nel)
 a standard implementation to handle the output data More...
 
void * spr_nn_data_out_null (void *restrict dst, const void *restrict layer_val, int Nel)
 ignore the output data (flush the system) More...
 
int spr_nni_feed_input (SprNNI *restrict nni, void *restrict data, SprNNDataIn func_get)
 
int spr_nni_read_output (SprNNI *restrict nni, int block, void *restrict data, SprNNDataOut func_put)
 
SprNNI* spr_nni_free (SprNNI *restrict nni)
 
SprNNI* spr_nn_interface (SprNN *restrict nn, int flags, int MT)
 

Detailed Description

Typedef Documentation

typedef struct SprNNStreamO_t SprNNStreamO

open a file for streaming the data to the read functions or for writing back the trained parameters

typedef struct SprNNStreamC_t SprNNStreamC

close an open stream

typedef struct SprNNIOEl_t SprNNIOEl

connect and stream open/close elements are stored in one IO list

typedef struct SprNNI_t SprNNI

NN interface.

typedef void*(* SprNNDataIn)(void *restrict layer_val,void *restrict src,int Nel)

function pointer to handle the input data

typedef void*(* SprNNDataOut)(void *restrict dst,const void *restrict layer_val,int Nel)

function pointer to handle the output data

Enumeration Type Documentation

anonymous enum
Enumerator
SPR_NN_EVAL 

read the data with the purpose of evaluating the NN (can be combined with TRAIN)

SPR_NN_TRAIN 

read the data with the purpose of training the NN (can be combined with EVAL)

SPR_NN_NOVECTOR 

do not use vector operations (require scalar operations)

SPR_NN_NOSCALAR 

do not use scalar operations (require vector operations)

SPR_NN_PARALLEL 

process multiple frames in parallel (reduces strain on memory, resulting in approx. 2x faster processing)

SPR_NN_ASYNC 

allow asynchronous training updates, i.e. the parameters are updated while (instead of before) the gradients for the next batch are computed.

SPR_NN_TRAIN2 

estimate second order statistics as well (learning rate per parameter)

SPR_NN_NOWARN 

do not warn on slow/inefficient constructs
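
The values above are bit flags and can be OR-ed together. A minimal sketch (the combination chosen here is purely illustrative):

  /* flags for a NN that will be evaluated and trained, with per-parameter
     learning rates (second order statistics); illustrative choice only */
  int flags = SPR_NN_EVAL | SPR_NN_TRAIN | SPR_NN_TRAIN2;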

Function Documentation

int spr_nn_dump ( SprStream *  fd,
const SprNN *restrict  nn 
)
SprNN* spr_nn_free ( SprNN *  nn)
SprNN* spr_nn_init ( SprStream *  fd,
const char *  fname,
int  flags,
SprVarlist *  vars 
)

Read a configuration from an open stream fd, or from the file with name fname (if fd equals NULL), or via spr_ssp_next_line() with fd as 'line_array' (if the bit flag SPR_NN_SSP_READ is set). The configuration defines the topology (number of layers, connections, ...) and the sources (files, ...) from which the parameters must be read. Before anything can be done with the NN, an interface has to be opened. The flags flags (and optionally some extra flags in string format) can limit the type of interfaces (vector, scalar, eval, train, ...) that can be opened, which allows more aggressive fine-tuning of the parameter storage. The argument vars can be used to pass existing variables to this routine, e.g. specifying the size of the NN. Use NULL to indicate that no existing variables are to be passed to the NN initialisation.

Returns
a handle to the configuration on success and NULL on failure.
Note
the function can also be called to print information to fd concerning the NN config file format by specifying (-1) as the flags argument
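
A minimal usage sketch, assuming a single umbrella header (called spraak.h here, an assumption not confirmed by this page): read a configuration from a file and release it again.

  #include "spraak.h"   /* assumed umbrella header for the SPRAAK NN API */

  int load_nn_example(const char *cfg_fname)
  {
    /* fd == NULL: read the configuration from the file named cfg_fname;
       vars == NULL: no existing variables are passed to the initialisation */
    SprNN *nn = spr_nn_init(NULL, cfg_fname, SPR_NN_EVAL, NULL);
    if (nn == NULL)
      return -1;                       /* reading the configuration failed */
    /* ... open an interface with spr_nn_interface() and use the NN ... */
    spr_nn_free(nn);
    return 0;
  }
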
void* spr_nn_data_in_memcpy ( void *restrict  layer_val,
void *restrict  src,
int  Nel 
)

a standard implementation to handle the input data

Copy Nel floating point values from src to layer_val.

Returns
The updated src pointer (src+Nel*sizeof(SprNNFltS)) on success and NULL on error.
Parameters
layer_val	destination for the requested data (node values of some input layer)
src	the source pointer, points to where in memory the input data can be read from
Nel	number of elements requested
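
A sketch of a custom SprNNDataIn callback with the same contract (copy Nel values into layer_val and return the advanced source pointer, or NULL on error); SprNNFltS is assumed to be the single-precision element type referred to in the Returns clause above.

  #include <string.h>

  static void *my_data_in(void *restrict layer_val, void *restrict src, int Nel)
  {
    if (layer_val == NULL || src == NULL || Nel < 0)
      return NULL;                                 /* invalid arguments */
    memcpy(layer_val, src, (size_t)Nel * sizeof(SprNNFltS));
    return (SprNNFltS *)src + Nel;                 /* updated source pointer */
  }
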
void* spr_nn_data_out_memcpy ( void *restrict  dst,
const void *restrict  layer_val,
int  Nel 
)

a standard implementation to handle the output data

Copy Nel floating point values from layer_val to dst.

Returns
The updated dst pointer (dst+Nel*sizeof(SprNNFltS)) on success and NULL on error.
Parameters
dst	the destination pointer, points to where in memory the output data must be written to
layer_val	the data (node values of some output layer) that must be written
Nel	number of elements to write
void* spr_nn_data_out_null ( void *restrict  dst,
const void *restrict  layer_val,
int  Nel 
)

ignore the output data (flush the system)

Ignore the output data.

Returns
a valid non-NULL pointer (layer_val+Nel).
int spr_nni_feed_input ( SprNNI *restrict  nni,
void *restrict  data,
SprNNDataIn  func_get 
)

Copy the input data referred to by data to the input(s) of the NN and execute the 'fwd' routine in nni if a full set of frames is collected (SPR_NN_N_PARALLEL frames in the case of parallel evaluation of frames, 1 frame otherwise). The function func_get knows how to handle the data pointer (which may be a memory pointer, a stream, ...). If data equals NULL, any pending non-full set of frames is closed and processed by the 'fwd' routine in nni.

Returns
(-1) on failure (thread error or error with the input data), otherwise the number of frames sent to the 'eval' routine in nni is returned (can be zero if parallel execution is requested and not enough frames have been collected).
Parameters
nni	NN interface
data	the input data (func_get knows how to get the data)
func_get	function that gets/copies the input data to the input layers
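
A sketch of a typical feeding loop; buf, Nfr and frame_len are hypothetical caller-side variables, and SprNNFltS is assumed to be the element type of the input layer.

  int feed_all(SprNNI *nni, SprNNFltS *buf, int Nfr, int frame_len)
  {
    for (int i = 0; i < Nfr; i++)
      if (spr_nni_feed_input(nni, buf + (size_t)i * frame_len,
                             spr_nn_data_in_memcpy) < 0)
        return -1;                     /* thread error or bad input data */
    /* data == NULL: close and process any pending, non-full set of frames */
    return spr_nni_feed_input(nni, NULL, spr_nn_data_in_memcpy);
  }
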
int spr_nni_read_output ( SprNNI *restrict  nni,
int  block,
void *restrict  data,
SprNNDataOut  func_put 
)

Check whether output is available (block==0); if block!=0 and no output is available, wait until output is available or until at least one input slot is available. If output is available, copy the next output frame of the NN to wherever data points (memory, stream, ...) using func_put.

Returns
0 if no output data is available, 1 if output data is available (and copied), and (-1) on failure (thread error, copying the output data failed).
Parameters
nni	NN interface
block	wait till output is available
data	where to write the output data (func_put knows how to write the data)
func_put	function that puts/copies the output layer data to wherever data points
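
A sketch of a non-blocking drain loop that copies every output frame that is currently available; out and frame_len are hypothetical caller-side variables, and SprNNFltS is assumed to be the element type of the output layer.

  int drain_output(SprNNI *nni, SprNNFltS *out, int frame_len)
  {
    int nread = 0, res;
    while ((res = spr_nni_read_output(nni, 0, out + (size_t)nread * frame_len,
                                      spr_nn_data_out_memcpy)) == 1)
      nread++;                         /* one more frame copied to 'out' */
    return (res < 0) ? -1 : nread;     /* -1 on error, else #frames copied */
  }
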
SprNNI* spr_nni_free ( SprNNI *restrict  nni)

Free the resources (memory, threads) associated with the NN interface nni.

Parameters
nni	NN interface
SprNNI* spr_nn_interface ( SprNN *restrict  nn,
int  flags,
int  MT 
)

Make an interface toward the NN for evaluation (bit flag SPR_NN_EVAL set in flags) and/or for training (bit flag SPR_NN_TRAIN set in flags). Configure for MT worker threads (use MT==0 for single-threaded use).

Other relevant bit flags are:

SPR_NN_NOVECTOR
do not use vector operations (require scalar operations)
SPR_NN_NOSCALAR
do not use scalar operations (require vector operations)
SPR_NN_PARALLEL
process multiple frames in parallel (reduces strain on memory, resulting in approx. 2x faster processing)
SPR_NN_ASYNC
allow asynchronous training updates, i.e. the parameters are updated while (instead of before) the gradients for the next batch are computed
SPR_NN_NOWARN
do not warn on slow/inefficient constructs
Returns
the interface or NULL on error.
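
A sketch of opening and later releasing an evaluation interface; the choice of parallel frame processing and 4 worker threads is purely illustrative.

  SprNNI *open_eval_interface(SprNN *nn)
  {
    /* evaluation only, parallel frame processing, 4 worker threads */
    SprNNI *nni = spr_nn_interface(nn, SPR_NN_EVAL | SPR_NN_PARALLEL, 4);
    if (nni == NULL)
      return NULL;                     /* interface could not be created */
    /* ... spr_nni_feed_input() / spr_nni_read_output() calls go here ... */
    return nni;                        /* release later with spr_nni_free(nni) */
  }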