Commit 19bae5e

the first working version

dongyx committed Jul 3, 2023 · 0 parents
Showing 32 changed files with 27,064 additions and 0 deletions.
7 changes: 7 additions & 0 deletions .gitignore
@@ -0,0 +1,7 @@
/lnn
testenv
*.local
*.o
*.swp
*.tmp
*.nosync
30 changes: 30 additions & 0 deletions LICENSE
@@ -0,0 +1,30 @@
BSD 3-Clause License

Copyright (c) DONG Yuxuan <https://www.dyx.name>

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
36 changes: 36 additions & 0 deletions Makefile
@@ -0,0 +1,36 @@
.PHONY: all test install clean

CC=cc
INSTALL=install
prefix=/usr/local
bindir=$(prefix)/bin

all: lnn

lnn: main.o utils.o matrix.o neunet.o diffable.o
$(CC) -o $@ $^

main.o: main.c utils.h neunet.h diffable.h
$(CC) -c -o $@ $<

utils.o: utils.c utils.h
$(CC) -c -o $@ $<

matrix.o: matrix.c matrix.h
$(CC) -c -o $@ $<

neunet.o: neunet.c neunet.h matrix.h utils.h
$(CC) -c -o $@ $<

diffable.o: diffable.c diffable.h
$(CC) -c -o $@ $<

test: lnn
./runtest

install: lnn
$(INSTALL) -d $(bindir)
$(INSTALL) $< $(bindir)

clean:
rm -rf lnn testenv *.o *.tmp
103 changes: 103 additions & 0 deletions README.md
@@ -0,0 +1,103 @@
LNN
===

LNN (Little Neural Network) is a command-line C program for running, training, and testing feedforward neural networks, with the following features:

- Lightweight, consisting of a single standalone executable;
- Serves as a Unix filter, so it is easy to combine with other programs;
- Plain-text formats for models, input, output, and samples;
- Compact notation for network structures;
- Different activation functions for different layers;
- L2 regularization;
- Mini-batch training.

**Table of Contents**

- [Installation](#installation)
- [Getting Started](#getting-started)
- [Further Documentation](#further-documentation)

Installation
------------

It is better to pick a version from the [release page](https://github.com/dongyx/lnn/releases)
than to download the working code,
unless you understand the status of the working code.
The latest release is always recommended.

$ make
$ sudo make install

By default, LNN is installed to `/usr/local`.
You can run `lnn --version` to verify the installation.
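
Since the Makefile defines `prefix` as an ordinary make variable, you can override it on the `make` command line, for instance to install under your home directory:

    $ make install prefix=$HOME/.local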

Getting Started
---------------

The following call of LNN creates a network with
a 10-dimensional input layer,
a 5-dimensional hidden layer,
and a 2-dimensional output layer.

$ lnn train -C q10i5s2s samples.txt >model.nn

The `-C` option creates a new model with the structure specified by its argument,
here `q10i5s2s`.
The first character `q` sets the loss function to the quadratic error.
The following three groups `10i`, `5s`, and `2s` declare that
there are 3 layers,
including the input layer,
with dimensions 10, 5, and 2, respectively.
The character after each dimension selects the activation function for that layer.
Here `i` and `s` denote the identity function and the sigmoid function, respectively ([Further Documentation](#further-documentation)).
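
Under the same notation, a deeper network with two 5-dimensional sigmoid hidden layers would presumably be specified as `q10i5s5s2s`:

    $ lnn train -C q10i5s5s2s samples.txt >model.nn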

The remaining part of this section assumes that
the network maps $R^n$ to $R^m$;
in other words, it has an $n$-dimensional input layer and an $m$-dimensional output layer.

LNN reads samples from the file operand, or, by default, the standard input.
The trained model is printed to the standard output in a text format.

The sample file is a text file containing numbers separated by whitespace characters (space, tab, newline).
Every $n+m$ consecutive numbers constitute one sample.
The first $n$ numbers of a sample constitute the input vector,
and the remaining $m$ numbers constitute the output vector.
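
For the `q10i5s2s` network above ($n=10$, $m=2$), every 12 numbers form one sample.
A `samples.txt` with two samples might look like this (the values are made up; any whitespace layout works):

    0 1 0 0 1 1 0 0 1 0  1 0
    1 0 1 1 0 0 1 1 0 1  0 1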

LNN supports many training arguments like learning rate, iteration count, and batch size ([Further Documentation](#further-documentation)).

LNN can train a network starting from an existing model
by replacing `-C` with `-m`.

$ lnn train -m model.nn samples.txt >model2.nn

This allows one to observe the behavior of the model at different stages
and to provide different training arguments for each stage.

The `run` sub-command runs an existing model.

$ lnn run -m model.nn input.txt

LNN reads the input vectors from the file operand, or, by default, the standard input.
The input shall contain numbers separated by whitespace characters
(space, tab, newline).
Every $n$ consecutive numbers constitute one input vector.

For each input vector, the corresponding output vector is printed to the standard output,
one vector per line,
with components separated by spaces.
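
For instance, with the model above ($m=2$), an `input.txt` holding two 10-dimensional vectors yields two output lines (the values shown are illustrative):

    $ lnn run -m model.nn input.txt
    0.93 0.07
    0.12 0.88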

The `test` sub-command evaluates an existing model.

$ lnn test -m model.nn samples.txt

LNN reads samples from the file operand, or, by default, the standard input.
The mean loss value of the samples is printed to the standard output.
The format of the input file is the same as for the `train` sub-command.
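
The printed value might look like this (illustrative):

    $ lnn test -m model.nn samples.txt
    0.031702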

Further Documentation
---------------------

- The [technical report](https://www.dyx.name/notes/lnn.html) serves as an extension of this README.
It contains more details and examples covering the design and usage.

- Calling `lnn --help` prints a brief summary of the command-line options.
123 changes: 123 additions & 0 deletions diffable.c
@@ -0,0 +1,123 @@
#include <string.h>
#include <math.h>

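/* identity activation: y = x */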
void ident(double *y, double *x, int n)
{
memcpy(y, x, n * sizeof *y);
}

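/* derivative of the identity, given the output vector y */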
void dident(double *d, double *y, int n)
{
while (n-- > 0)
*d++ = 1;
}

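/* logistic sigmoid: y = 1/(1 + e^-x) */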
void sigm(double *y, double *x, int n)
{
while (n-- > 0)
*y++ = 1 / (1 + exp(-*x++));
}

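/* sigmoid derivative in terms of the output: d = y(1 - y) */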
void dsigm(double *d, double *y, int n)
{
for (; n-- > 0; y++)
*d++ = *y * (1 - *y);
}

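/* hyperbolic tangent, computed as (e^2x - 1)/(e^2x + 1); exp(2x) can overflow for large x */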
void htan(double *y, double *x, int n)
{
double h;

while (n-- > 0) {
h = exp(2 * *x++);
*y++ = (h-1)/(h+1);
}
}

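/* tanh derivative in terms of the output: d = 1 - y^2 */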
void dhtan(double *d, double *y, int n)
{
for (; n-- > 0; y++)
*d++ = 1 - (*y)*(*y);
}

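/* rectified linear unit: y = max(x, 0) */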
void relu(double *y, double *x, int n)
{
for (; n-- > 0; x++)
*y++ = *x > 0 ? *x : 0;
}

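/* ReLU derivative: 1 where the output is positive, 0 elsewhere */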
void drelu(double *d, double *y, int n)
{
while (n-- > 0)
*d++ = *y++ > 0;
}

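/* softmax: y_i = e^x_i / sum_j e^x_j; exp can overflow for large inputs */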
void smax(double *y, double *x, int n)
{
double s;
int i;

for (s = i = 0; i < n; i++)
s += (y[i] = exp(x[i]));
while (n-- > 0)
*y++ /= s;
}

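/* softmax Jacobian: d[i][j] = y_i (delta_ij - y_j) */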
void dsmax(double **d, double *y, int n)
{
int i, j;

for (i = 0; i < n; i++)
for (j = 0; j < n; j++)
if (i == j)
d[i][j] = y[i]*(1-y[i]);
else
d[i][j] = -y[i]*y[j];
}

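/* quadratic error: sum_i (o_i - t_i)^2 / 2 */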
double quade(double *ov, double *tv, int n)
{
double s, d;

for (s = 0; n-- > 0; ov++, tv++) {
d = *ov - *tv;
s += d*d / 2;
}
return s;
}

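/* gradient of the quadratic error w.r.t. the outputs */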
void dquade(double *dv, double *ov, double *tv, int n)
{
while (n-- > 0)
*dv++ = *ov++ - *tv++;
}

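/* binary cross entropy: -sum_i [t_i log(o_i) + (1 - t_i) log(1 - o_i)] */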
double binxe(double *ov, double *tv, int n)
{
double s;

for (s = 0; n-- > 0; ov++, tv++)
s -= *tv*log(*ov) + (1-*tv)*log(1-*ov);
return s;
}

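/* gradient of the binary cross entropy w.r.t. the outputs */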
void dbinxe(double *dv, double *ov, double *tv, int n)
{
for (; n-- > 0; ov++, tv++)
*dv++ = (*ov-*tv) / (*ov*(1-*ov));
}

/* categorical cross entropy: -sum_i t_i log(o_i) */
double xentr(double *ov, double *tv, int n)
{
double s;

for (s = 0; n-- > 0;)
s -= *tv++ * log(*ov++);
return s;
}

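/* gradient of the cross entropy w.r.t. the outputs */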
void dxentr(double *dv, double *ov, double *tv, int n)
{
while (n-- > 0)
*dv++ = -*tv++ / *ov++;
}
19 changes: 19 additions & 0 deletions diffable.h
@@ -0,0 +1,19 @@
/* differentiable functions and their derivatives */

extern void ident(double *y, double *x, int n);
extern void dident(double *d, double *y, int n);
extern void sigm(double *y, double *x, int n);
extern void dsigm(double *d, double *y, int n);
extern void htan(double *y, double *x, int n);
extern void dhtan(double *d, double *y, int n);
extern void relu(double *y, double *x, int n);
extern void drelu(double *d, double *y, int n);
extern void smax(double *y, double *x, int n);
extern void dsmax(double **d, double *y, int n);

extern double quade(double *ov, double *tv, int n);
extern void dquade(double *dv, double *ov, double *tv, int n);
extern double binxe(double *ov, double *tv, int n);
extern void dbinxe(double *dv, double *ov, double *tv, int n);
extern double xentr(double *ov, double *tv, int n);
extern void dxentr(double *dv, double *ov, double *tv, int n);
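
As a minimal sketch of how these primitives compose (a hypothetical driver, not part of this commit; it can be built alongside `diffable.c` with `cc -o demo demo.c diffable.c -lm`):

    #include <stdio.h>
    #include "diffable.h"

    int main(void)
    {
        double x[3] = {-1.0, 0.0, 1.0};  /* pre-activations */
        double y[3], d[3];               /* activations and derivatives */
        double t[3] = {0.0, 0.5, 1.0};   /* target vector */

        sigm(y, x, 3);    /* forward: y = sigmoid(x) */
        dsigm(d, y, 3);   /* backward: d = y(1 - y), from the output alone */
        printf("loss = %g\n", quade(y, t, 3));  /* quadratic error against t */
        return 0;
    }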