# C++ Neural Networks Library

A neural network library written in C++ with analytical derivative propagation, Eigen-backed linear algebra, and Python bindings via pybind11.
## Overview
c_nn is a C++ neural network library built around a composable Function class system. Each activation function carries its own exact analytical derivative, so backpropagation never performs numerical differentiation or symbolic math at runtime; it simply calls the precomputed gradient lambda. The library exposes a full Python API through pybind11.
## How It Works
The core abstraction is Function: a thin wrapper around two std::function<double(double)> lambdas, one for the value and one for its derivative. Every activation (Sigmoid, ReLU, Sin, Log, Pol, etc.) sets both in its constructor:
```cpp
Sigmoid::Sigmoid() : Function()
{
    func res = [](Scalar x) { return 1 / (1 + exp(-x)); };
    // Capture res by value so the stored gradient does not dangle
    // once the constructor returns.
    func grad = [res](Scalar x) { return res(x) * (1 - res(x)); };
    this->setFunc(res);
    this->setGradient(grad);
}
```

Composition operators (`+`, `-`, `*`, `/`, and exponent) build new Function objects that wire together both the value and the gradient according to arithmetic differentiation rules:
```cpp
Function operator*(const Function &lhs, const Function &rhs)
{
    // Capture by value: the returned Function must not reference the operands.
    func res = [lhs, rhs](Scalar x) { return lhs.call(x) * rhs.call(x); };
    func grad = [lhs, rhs](Scalar x) {
        return lhs.gradAt(x) * rhs.call(x) + lhs.call(x) * rhs.gradAt(x);
    };
    // ...
}
```

This means composite activations such as sin(x)^2 or sigmoid(x) * relu(x) get a correct analytical gradient automatically, with no extra work during training.
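The same product-rule wiring can be sketched in a few lines of plain Python. This is a minimal illustration of the idea, not the actual cnn bindings; the `Fn` class and its members are hypothetical names:

```python
# Minimal sketch of a function paired with its analytical derivative,
# composed via the product rule at construction time.
import math

class Fn:
    def __init__(self, f, df):
        self.f = f    # value lambda
        self.df = df  # gradient lambda

    def __mul__(self, other):
        # Product rule: (u * v)' = u'v + uv'
        return Fn(lambda x: self.f(x) * other.f(x),
                  lambda x: self.df(x) * other.f(x) + self.f(x) * other.df(x))

sigmoid = Fn(lambda x: 1 / (1 + math.exp(-x)),
             lambda x: (1 / (1 + math.exp(-x))) * (1 - 1 / (1 + math.exp(-x))))
relu = Fn(lambda x: max(x, 0.0),
          lambda x: 1.0 if x > 0 else 0.0)

act = sigmoid * relu   # value and exact gradient wired once, at construction
print(act.f(1.0), act.df(1.0))
```

Evaluating `act.df` later involves no differentiation at all, only calls into the two stored closures.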
## Construction-Time Derivative Calculation
The key insight is that derivatives are resolved when function objects are constructed, not when they are evaluated. By the time training starts, every layer already holds a lambda that computes the exact derivative of its activation. Backpropagation reduces to a sequence of lambda calls and Eigen matrix operations: no branching on function type, no numerical approximations.
For built-in activations the derivative is hardcoded analytically. For composed activations it is wired together once at construction via the product rule, chain rule, and quotient rule captured in the operator overloads above. Either way, by training time the gradient is just a stored std::function.
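The "no branching on function type" point can be illustrated with a standalone sketch (plain Python with illustrative names, not the cnn API): once each layer stores its activation's value/gradient pair as plain callables, the backward pass never inspects what kind of activation it has.

```python
# Each layer holds (f, df) closures fixed at construction; the backward
# pass just calls them - no isinstance checks, no switch on activation type.
import math

layers = [
    {"f": lambda x: max(x, 0.0), "df": lambda x: 1.0 if x > 0 else 0.0},  # ReLU
    {"f": lambda x: 1 / (1 + math.exp(-x)),
     "df": lambda x: math.exp(-x) / (1 + math.exp(-x)) ** 2},             # Sigmoid
]

def backprop_factor(x):
    # Chain rule through the stack: multiply each layer's local derivative
    # at its input, then feed the activation forward.
    factor = 1.0
    for layer in layers:
        factor *= layer["df"](x)   # just a call into a stored closure
        x = layer["f"](x)
    return factor

print(backprop_factor(0.5))
```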
## Eigen
All weight matrices, bias vectors, and activation outputs use Eigen types defined in cpp/include/cnn/Types.hpp:
```cpp
typedef Eigen::MatrixXd Matrix;
typedef Eigen::RowVectorXd RowVector;
typedef Eigen::VectorXd ColVector;
```

The forward pass in a Dense layer, the delta accumulation in backpropagation, and the weight update step all reduce to Eigen expressions. Function evaluation loops over matrices with `#pragma omp parallel for` to take advantage of OpenMP:
```cpp
Matrix Function::call(Matrix &x) const
{
    Matrix r(x.rows(), x.cols());
    // Eigen::Index (signed) matches the return type of rows()/cols()
    // and keeps the OpenMP loop indices warning-free.
    #pragma omp parallel for collapse(2)
    for (Eigen::Index i = 0; i < x.rows(); ++i)
        for (Eigen::Index j = 0; j < x.cols(); ++j)
            r(i, j) = this->getFunc()(x(i, j));
    return r;
}
```

The weight update in Sequential::backPropagate is a straightforward scaled Eigen update, with the accumulated deltas averaged over the minibatch size (hi - lo):
```cpp
output_layer->getW() -= (alpha * w_delta) / (hi - lo);
output_layer->getB() -= (alpha * b_delta) / (hi - lo);
```

## Python Bindings
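In NumPy terms the update above is a one-liner. This sketch mirrors the C++ snippet with illustrative shapes and names; it is not part of the cnn API:

```python
# Minibatch SGD step: subtract the accumulated gradient, scaled by the
# learning rate alpha and averaged over the minibatch of size hi - lo.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2))        # weight matrix of a Dense layer
w_delta = rng.standard_normal((4, 2))  # gradient accumulated over the minibatch
alpha, lo, hi = 0.5, 0, 2              # learning rate and minibatch bounds

W_new = W - (alpha * w_delta) / (hi - lo)
print(W_new.shape)
```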
The python/ directory wraps the entire library with pybind11, including Eigen ↔ NumPy array conversion via pybind11/eigen.h. The full Function algebra, all layer types, and the Sequential model are exposed.
```python
import cnn
import numpy as np

sig = cnn.Sigmoid()
relu = cnn.ReLU()

model = cnn.Sequential(alpha=0.5)
model.add_layer(cnn.Input(2))
model.add_layer(cnn.Dense(4, relu))
model.add_layer(cnn.Dense(1, sig))

# XOR dataset
x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float64)
y_train = np.array([[0], [1], [1], [0]], dtype=np.float64)

model.run(x_train, y_train, epochs=500, minibatch_size=2)
print(model.predict(x_train))
```

Composed activations work identically in Python since the operators are exposed via py::self:
```python
pol2 = cnn.Pol(2.0)  # x^2, gradient = 2x
log  = cnn.Log()     # ln(x), gradient = 1/x
act  = pol2 + log    # x^2 + ln(x), gradient = 2x + 1/x - computed automatically
```

## C++ Usage
```cpp
#include <cnn.hpp>
#include <iostream>

int main()
{
    Sigmoid sig = Sigmoid();
    ReLU relu = ReLU();

    Sequential *model = new Sequential(/*alpha=*/0.01);
    model->addLayer(new Input(2));
    model->addLayer(new Dense(16, &relu));
    model->addLayer(new Dense(8, &relu));
    model->addLayer(new Dense(1, &sig));

    // XOR dataset
    Matrix x_train = Matrix::Zero(4, 2);
    Matrix y_train = Matrix::Zero(4, 1);
    x_train(1, 1) = 1; x_train(2, 0) = 1;
    x_train(3, 0) = 1; x_train(3, 1) = 1;
    y_train(1, 0) = 1; y_train(2, 0) = 1;

    model->run(x_train, y_train, /*epochs=*/1000, /*minibatch_size=*/2);
    std::cout << model->predict(x_train) << std::endl;

    model->saveModel("xor.cnn");

    // Load into a fresh model
    Model *loaded = new Sequential(0.01);
    loaded->loadModel("xor.cnn");
    std::cout << loaded->predict(x_train) << std::endl;

    return 0;
}
```

## Building
```sh
# Build everything (C++ library + Python module + tests)
mkdir build && cd build
cmake ..
make

# C++ only
cd cpp && mkdir build && cd build
cmake .. -DPYTHON_EXECUTABLE="/path/to/python" \
         -DPYTHON_LIBRARY_DIR="/path/to/site-packages"
make && sudo make install

# Run tests
cd build/tests && ctest
```