cyclum.models package

Submodules

cyclum.models.ae module

Main module.

class cyclum.models.ae.AutoEncoder(input_width=None, encoder_depth=2, encoder_width=50, n_circular_unit=1, n_logistic_unit=0, n_linear_unit=0, n_linear_bypass=0, dropout_rate=0.0, nonlinear_reg=0.0001, linear_reg=0.0001, filepath=None)[source]

Bases: cyclum.models.ae.BaseAutoEncoder

A Cyclum-style autoencoder.

Parameters
  • input_width (Optional[int]) – width of input, i.e., number of genes

  • encoder_depth (int) – depth of encoder, i.e., number of hidden layers

  • encoder_width (Union[int, List[int]]) –

    width of encoder, one of the following:

    • An integer: the number of nodes per layer; all hidden layers will have the same number of nodes.

    • A list of integers whose length equals encoder_depth, giving the number of nodes in each layer.

  • n_circular_unit (int) – number of circular units; currently 0 or 1 (support for more than one may be added in the future).

  • n_logistic_unit (int) – number of logistic (tanh) units that run on the circular embedding. Under testing.

  • n_linear_unit (int) – number of linear units that run on the circular embedding. Under testing.

  • n_linear_bypass (int) – number of linear bypass components.

  • dropout_rate (float) – rate for dropout.

  • nonlinear_reg (float) – strength of regularization on the nonlinear encoder.

  • linear_reg (float) – strength of regularization on the linear encoder.

  • filepath (Optional[str]) – filepath of a stored model. If specified, all other parameters are ignored.
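
A minimal construction sketch (not taken from the library's documentation); the expression matrix data is a hypothetical placeholder with cells as rows and genes as columns:

    import numpy as np
    from cyclum.models.ae import AutoEncoder

    # Hypothetical expression matrix: 500 cells x 1000 genes.
    data = np.random.randn(500, 1000)

    # Input width must match the number of genes; encoder_width gives one entry per hidden layer.
    model = AutoEncoder(input_width=data.shape[1],
                        encoder_depth=2,
                        encoder_width=[50, 50],
                        n_circular_unit=1,
                        n_linear_bypass=3,
                        dropout_rate=0.1)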

get_weight()[source]

Get the weights of the transform; the last two dimensions correspond to the sinusoidal unit.

Returns

a matrix
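
A brief inspection sketch, continuing the hypothetical model above:

    # The last two dimensions belong to the sinusoidal (sine/cosine) unit.
    w = model.get_weight()
    print(w.shape)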

pre_train(data, n_linear_bypass, epochs=100, verbose=10, rate=0.0001)[source]

Pre-train the network with PCA, which may save some training time. Only applicable to models combining a circular unit with linear bypasses.

Parameters
  • data – data used

  • n_linear_bypass (int) – number of linear bypasses; must match the value specified at initialization.

  • epochs (int) – training epochs

  • verbose (int) – interval, in epochs, at which the loss, time consumption, etc. are reported.

  • rate (float) – learning rate

Returns

history of loss
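
A hedged usage sketch, continuing the hypothetical model and data above; n_linear_bypass matches the value passed to the constructor:

    # Optional PCA-based pre-training; only meaningful when the model combines
    # a circular unit with linear bypasses.
    pretrain_history = model.pre_train(data, n_linear_bypass=3, epochs=100, verbose=10, rate=1e-4)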

predict_linear_bypass(data)[source]

Predict the linear bypass loadings.

Parameters

data – data to predict on

Returns

the linear bypass loadings

predict_pseudotime(data)[source]

Predict the circular pseudotime

Parameters

data – data to predict on

Returns

the circular pseudotime

train(data, batch_size=None, epochs=100, verbose=10, rate=0.0001)[source]

Train the model. Weights are not reset between calls, so it can be called repeatedly to continue training.

Parameters
  • data – data used for training

  • batch_size (Optional[int]) – batch size for training; if unspecified, defaults to 32 (the Keras default)

  • epochs (int) – number of epochs in training

  • verbose (int) – interval, in epochs, at which the loss, time consumption, etc. are reported.

  • rate (float) – learning rate

Returns

history of loss
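
A typical fit-then-predict sketch, continuing the hypothetical model and data above; the specific hyperparameter values are illustrative, not recommendations:

    # Train; weights are kept between calls, so train() can be invoked again to continue fitting.
    history = model.train(data, batch_size=64, epochs=500, verbose=50, rate=2e-4)

    # Recover the per-cell circular pseudotime and the linear bypass loadings.
    pseudotime = model.predict_pseudotime(data)
    bypass_loadings = model.predict_linear_bypass(data)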

class cyclum.models.ae.BaseAutoEncoder[source]

Bases: object

class MyCallback(interval)[source]

Bases: keras.callbacks.Callback

Callback that reports loss, time, etc. at a fixed epoch interval.

Parameters

interval – report loss, time, etc. every interval epochs

on_epoch_end(batch, logs=None)[source]
on_train_begin(logs=None)[source]
static circular_unit(name, comp=2)[source]

Create a circular unit

Parameters
  • name (str) – Name of this unit

  • comp (int) – number of phase components; default 2.

Return type

Callable

Returns

function f: input tensor -> output tensor

static decoder(name, n)[source]

Create a decoder

Parameters
  • name (str) – Name of this unit

  • n (int) – Output width

Return type

Callable

Returns

function f: input tensor -> output tensor

static encoder(name, size, reg, drop, act='tanh')[source]

Create a nonlinear encoder

Parameters
  • name (str) – Name of this unit

  • size (List[int]) – Size of each layer

  • reg (float) – regularization strength

  • drop (float) – dropout rate

  • act (Union[str, Callable]) – activation function

Return type

Callable

Returns

function f: input tensor -> output tensor
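
A rough composition sketch: each of these factories returns a callable mapping an input tensor to an output tensor, so they can be chained in the Keras functional style (an assumption for illustration; the actual wiring used by AutoEncoder may differ):

    import keras
    from cyclum.models.ae import BaseAutoEncoder

    n_genes = 1000  # hypothetical input width

    x = keras.Input(shape=(n_genes,))
    # Nonlinear encoder: two hidden tanh layers of 50 nodes, light regularization, no dropout.
    h = BaseAutoEncoder.encoder("enc", size=[50, 50], reg=1e-4, drop=0.0)(x)
    # Circular unit: projects the encoding onto sinusoidal (sine/cosine) components.
    c = BaseAutoEncoder.circular_unit("circ")(h)
    # Linear decoder back to gene space.
    x_hat = BaseAutoEncoder.decoder("dec", n_genes)(c)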

static linear_bypass(name, n, reg)[source]

Create a linear encoder

Parameters
  • name (str) – Name of this unit

  • n (int) – number of linear bypass components

  • reg (float) – regularization strength

Return type

Callable

Returns

function f: input tensor -> output tensor

static linear_unit(name, n, trans=True, reg_scale=0.01, reg_trans=0.01)[source]

Create a linear unit

Parameters
  • name (str) – Name of this unit

  • n (int) – Number of perceptrons

  • trans (bool) – Allow translation (i.e. b in Ax + b)

  • reg_scale (float) – regularization on scaling (i.e. A in Ax + b)

  • reg_trans (float) – regularization of translation

Return type

Callable

Returns

function f: input tensor -> output tensor

load(filepath)[source]

Load a BaseAutoEncoder object

Parameters

filepath – filepath of the stored model

Returns

static logistic_unit(name, n, trans=True, reg_scale=0.01, reg_trans=0.01)[source]

Create a logistic unit

Parameters
  • name (str) – Name of this unit

  • n (int) – Number of perceptrons

  • trans (bool) – Allow translation (i.e. b in Ax + b)

  • reg_scale (float) – regularization on scaling (i.e. A in Ax + b)

  • reg_trans (float) – regularization of translation

Return type

Callable

Returns

function f: input tensor -> output tensor

save(filepath)[source]

Save a BaseAutoEncoder object

Parameters

filepath – the .h5 suffix is recommended, e.g., filename.h5

Returns
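
A hedged save/load round trip, continuing the hypothetical model and data above; the filename is a placeholder:

    # Persist the trained model; the .h5 suffix is recommended.
    model.save("cyclum_model.h5")

    # Restore it later; when filepath is given, the other constructor arguments are ignored.
    restored = AutoEncoder(filepath="cyclum_model.h5")
    pseudotime = restored.predict_pseudotime(data)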

show_structure()[source]

Show the structure of the network

Returns

The graph for the structure

Module contents