simplenet package

Submodules

simplenet.simplenet module

simplenet.simplenet :: Define SimpleNet class and common functions.

class simplenet.simplenet.SimpleNet(hidden_layer_sizes: typing.Sequence[int], input_shape: typing.Tuple[int, int], output_shape: typing.Tuple[int, int], activation_function: typing.Callable[..., numpy.ndarray] = <function sigmoid>, output_activation: typing.Callable[..., numpy.ndarray] = <function sigmoid>, loss_function: typing.Callable[..., float] = <function neg_log_likelihood>, learning_rate: float = 1.0, dtype: str = 'float32', seed: int = None) → None[source]

Bases: object

Simple example of a multilayer perceptron.

__init__(hidden_layer_sizes: typing.Sequence[int], input_shape: typing.Tuple[int, int], output_shape: typing.Tuple[int, int], activation_function: typing.Callable[..., numpy.ndarray] = <function sigmoid>, output_activation: typing.Callable[..., numpy.ndarray] = <function sigmoid>, loss_function: typing.Callable[..., float] = <function neg_log_likelihood>, learning_rate: float = 1.0, dtype: str = 'float32', seed: int = None) → None[source]

Initialize the MLP.

Parameters:
  • hidden_layer_sizes – Number of neurons in each hidden layer
  • input_shape – Shape of inputs (m x n), use None for unknown m
  • output_shape – Shape of outputs (m x o), use None for unknown m
  • activation_function – Activation function for all layers prior to output
  • output_activation – Activation function for output layer
  • loss_function – Loss function used to compute the training loss
  • learning_rate – Step size used when updating the weights
  • dtype – Data type for floats (e.g. np.float32 vs np.float64)
  • seed – Optional random seed for consistent outputs (for debugging)
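
A minimal construction sketch (the layer sizes, shapes, and seed below are illustrative values, not defaults):

    from simplenet.simplenet import SimpleNet, relu, sigmoid

    # Hypothetical network: 4 input features, two hidden layers, 3 outputs.
    # None in a shape stands for an unknown batch size m, per the notes above.
    net = SimpleNet(
        hidden_layer_sizes=[16, 8],
        input_shape=(None, 4),
        output_shape=(None, 3),
        activation_function=relu,
        output_activation=sigmoid,
        learning_rate=0.5,
        dtype="float32",
        seed=42,
    )
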
export_model(filename: str) → None[source]

Export the learned biases and weights to a file.

Saves each weight and bias with an index and a prefix of W or b so that the model can be restored in the correct order.

Parameters: filename – Filename for the saved file.
import_model(filename: str) → None[source]

Import learned biases and weights from a file.

Parameters: filename – Name of file from which to import
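
A hypothetical save/restore round trip, continuing the construction sketch above (the filename is illustrative; the on-disk format is whatever export_model writes):

    net.export_model("simplenet_weights")   # save W0, b0, W1, b1, ... in order

    # Restore into a fresh network with the same architecture.
    net2 = SimpleNet(hidden_layer_sizes=[16, 8],
                     input_shape=(None, 4),
                     output_shape=(None, 3))
    net2.import_model("simplenet_weights")
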
learn(inputs: typing.Union[typing.Sequence[int], typing.Sequence[float], numpy.ndarray], targets: typing.Union[typing.Sequence[int], typing.Sequence[float], numpy.ndarray]) → None[source]

Perform a forward and backward pass, updating weights.

Parameters:
  • inputs – Array of input values
  • targets – Array of true outputs
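
A toy training loop, continuing the sketch above (the data is fabricated purely for illustration):

    import numpy as np

    X = np.array([[0.1, 0.2, 0.3, 0.4]], dtype="float32")  # one sample, 4 features
    y = np.array([[0.0, 1.0, 0.0]], dtype="float32")       # 3 target outputs

    for _ in range(1000):   # each call is one forward and one backward pass
        net.learn(X, y)
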
predict(inputs: typing.Union[typing.Sequence[int], typing.Sequence[float], numpy.ndarray]) → numpy.ndarray[source]

Use existing weights to predict outputs for given inputs.

Note: this method does not update weights.

Parameters: inputs – Array of inputs for which to make predictions
Returns: Array of predictions
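
For example, continuing the training sketch above:

    preds = net.predict(X)   # same shape as the targets; weights are untouched
    print(preds)
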
validate(inputs: numpy.ndarray, targets: numpy.ndarray, epsilon: float = 1e-07) → bool[source]

Use gradient checking to validate backpropagation.

This method uses a naive numerical implementation of gradient checking to verify the analytic gradients computed during backpropagation.

Parameters:
  • inputs – Array of input values
  • targets – Array of true outputs
  • epsilon – Small value by which to perturb values for gradient checking
Returns:

Boolean indicating whether the analytic and numerical gradients agree
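
The idea behind naive gradient checking is to perturb each parameter by epsilon in both directions and compare the central-difference slope with the analytic gradient from backpropagation. A generic sketch of that estimate (the standard technique, not SimpleNet's internal code):

    import numpy as np

    def numerical_gradient(loss, params, epsilon=1e-07):
        """Central-difference estimate of d(loss)/d(params).

        loss is a zero-argument callable evaluating the current loss;
        params is a numpy array that loss reads from.
        """
        grad = np.zeros_like(params)
        for i in range(params.size):
            original = params.flat[i]
            params.flat[i] = original + epsilon
            loss_plus = loss()
            params.flat[i] = original - epsilon
            loss_minus = loss()
            params.flat[i] = original          # restore the parameter
            grad.flat[i] = (loss_plus - loss_minus) / (2 * epsilon)
        return grad

In practice one simply calls net.validate(X, y) and checks the returned boolean.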

simplenet.simplenet.cross_entropy(y_hat: numpy.ndarray, targets: numpy.ndarray, der: bool = False) → float[source]

Calculate the categorical cross entropy loss.

Parameters:
  • y_hat – Array of predicted values from 0 to 1
  • targets – Array of true values
  • der – Whether to calculate the derivative
Returns:

Mean loss for the sample
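
For reference, the mean categorical cross-entropy over a batch of one-hot targets can be written as follows (a sketch, not necessarily SimpleNet's exact implementation; the der branch is omitted):

    import numpy as np

    def cross_entropy_reference(y_hat, targets):
        # -sum over classes of t * log(p), averaged over the batch
        return float(np.mean(-np.sum(targets * np.log(y_hat), axis=1)))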

simplenet.simplenet.neg_log_likelihood(y_hat: numpy.ndarray, targets: numpy.ndarray, der: bool = False) → float[source]

Calculate the negative log likelihood loss.

This is also known as the binary cross-entropy loss function.

Parameters:
  • y_hat – Array of predicted values from 0 to 1
  • targets – Array of true values
  • der – Whether to calculate the derivative
Returns:

Mean loss for the sample
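
A reference formulation of the binary cross-entropy (again a sketch, with the der branch omitted):

    import numpy as np

    def neg_log_likelihood_reference(y_hat, targets):
        # -[t * log(p) + (1 - t) * log(1 - p)], averaged over all entries
        return float(np.mean(-(targets * np.log(y_hat)
                               + (1 - targets) * np.log(1 - y_hat))))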

simplenet.simplenet.relu(arr: numpy.ndarray, der: bool = False) → numpy.ndarray[source]

Calculate the relu activation function.

Parameters:
  • arr – Input array
  • der – Whether to calculate the derivative
Returns:

Array in which negative inputs are clamped to 0 and non-negative inputs pass through unchanged
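
A sketch with the same interface (the derivative's value at exactly 0 is an implementation choice assumed here):

    import numpy as np

    def relu_reference(arr, der=False):
        if der:
            return (arr > 0).astype(arr.dtype)  # 1 where input is positive, else 0
        return np.maximum(0, arr)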

simplenet.simplenet.sigmoid(arr: numpy.ndarray, der: bool = False) → numpy.ndarray[source]

Calculate the sigmoid activation function.

\[\frac{1}{1 + e^{-x}}\]

Derivative, where x denotes the sigmoid output rather than the raw input:

\[x \, (1 - x)\]
Parameters:
  • arr – Input array of weighted sums
  • der – Whether to calculate the derivative
Returns: Array of outputs from 0 to 1
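
A sketch with the same interface, following the docstring's convention that the derivative is taken on an array that already holds sigmoid outputs:

    import numpy as np

    def sigmoid_reference(arr, der=False):
        if der:
            return arr * (1 - arr)   # arr is assumed to contain sigmoid outputs
        return 1 / (1 + np.exp(-arr))
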
simplenet.simplenet.softmax(arr: numpy.ndarray) → numpy.ndarray[source]

Calculate the softmax activation function.

This implementation uses a “stable softmax” that subtracts the maximum value before exponentiating, which improves numerical stability without changing the results.

\[\frac{e^{x_i}}{\sum_j e^{x_j}}\]
Parameters: arr – Input array of weighted sums
Returns: Array of outputs from 0 to 1 that sum to 1
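
A sketch of the stable softmax described above, assuming rows are samples and columns are classes:

    import numpy as np

    def softmax_reference(arr):
        shifted = arr - np.max(arr, axis=1, keepdims=True)  # the "stable" shift
        exps = np.exp(shifted)
        return exps / np.sum(exps, axis=1, keepdims=True)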

Module contents

simplenet :: Simple multilayer perceptron in Python using numpy.