v1.1.2
- `ActivationFunctionSigmoid`:
  - Changed to use new faster `dart:math.exp` function.
gmpassos committed May 30, 2021
1 parent a9dcbb4 commit 99e84d4
Showing 5 changed files with 64 additions and 34 deletions.
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,8 @@
## 1.1.2

- `ActivationFunctionSigmoid`:
  - Changed to use new faster `dart:math.exp` function.

## 1.1.1

- `ActivationFunction`:
28 changes: 18 additions & 10 deletions README.md
@@ -128,6 +128,8 @@ Backpropagation{elapsedTime: 111 ms, hertz: 83495.49549549549 Hz, ops: 9268, sta
# SIMD (Single Instruction Multiple Data)

Dart has support for SIMD when computation is made using [Float32x4] and [Int32x4].
The Activation Functions are implemented using [Float32x4], improving
performance by 1.5x to 2x when compared to a normal implementation.

The basic principle of SIMD is to execute math operations on 4 numbers simultaneously.
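
For illustration, here is a minimal stand-alone sketch (not part of the package) showing 4 operations executed at once with `Float32x4`:

```dart
import 'dart:typed_data';

void main() {
  // Two SIMD values, each holding 4 single-precision floats (lanes x, y, z, w).
  var a = Float32x4(1.0, 2.0, 3.0, 4.0);
  var b = Float32x4(10.0, 20.0, 30.0, 40.0);

  // A single `*` multiplies all 4 lanes simultaneously.
  var product = a * b;

  print('${product.x} ${product.y} ${product.z} ${product.w}'); // 10.0 40.0 90.0 160.0
}
```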

@@ -220,36 +222,42 @@ is an experimental implementation to exercise an `ANN` based in integers.
[e][dart_math_e], to the power x.

This is an important `ANN` function, since it is used by the popular
Sigmoid function. A high precision version is usually slow, but high
precision is actually not necessary for Artificial Neural Networks, so
approximation versions can be used for most `ANN` models and training
algorithms.

[dart_math_e]: https://api.dart.dev/stable/2.12.1/dart-math/e-constant.html
[dart_math_exp]: https://api.dart.dev/stable/2.12.1/dart-math/exp.html
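
To make that trade-off concrete, here is a small stand-alone sketch (not the package's actual code) comparing a Sigmoid built on `dart:math.exp` with a cheap approximation; the formula below is just one common approximation and not necessarily the one used by `ActivationFunctionSigmoidFast`:

```dart
import 'dart:math' as math;

// Exact Sigmoid: 1 / (1 + e^-x), using `dart:math.exp`.
double sigmoid(double x) => 1 / (1 + math.exp(-x));

// Cheap approximation based on x / (1 + |x|), rescaled to the (0, 1) range.
// Lower precision, but often sufficient for ANN models and training.
double sigmoidApprox(double x) => 0.5 + (x / (2 * (1 + x.abs())));

void main() {
  for (var x in [-4.0, -1.0, 0.0, 1.0, 4.0]) {
    print('x=$x  sigmoid=${sigmoid(x).toStringAsFixed(4)}  '
        'approx=${sigmoidApprox(x).toStringAsFixed(4)}');
  }
}
```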

# Fast Math

An internal Fast Math library is present and can be used on platforms
where computing `exp` (the Exponential function) is not efficient.

You can import this library and use it to create a specialized
`ActivationFunction` implementation, or use it in any other kind of project:

```dart
import 'dart:typed_data';

import 'package:eneural_net/eneural_net_fast_math.dart' as fast_math;

void main() {
  // Fast Exponential function:
  var o = fast_math.exp(2);

  // Fast Exponential function with high precision:
  var highPrecision = <double>[0.0, 0.0];
  var oHighPrecision = fast_math.expHighPrecision(2, 0.0, highPrecision);

  // Fast Exponential function with SIMD acceleration:
  var o32x4 = fast_math.expFloat32x4(Float32x4(2, 3, 4, 5));

  print('$o $oHighPrecision $o32x4');
}
```

The implementation is based on the Dart package [Complex](https://pub.dev/packages/complex):
- https://github.com/rwl/complex/blob/master/lib/src/fastmath.dart

The `fast_math.expFloat32x4` function was created by Graciliano M. Passos ([gmpassos@GitHub][github]).
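
If you want to check whether the Fast Math `exp` pays off on your target platform, a rough micro-benchmark along these lines can be used (a simplistic sketch assuming the import shown above; the package's own `Chronometer`-based benchmark examples are more thorough):

```dart
import 'dart:math' as math;

import 'package:eneural_net/eneural_net_fast_math.dart' as fast_math;

void main() {
  const n = 10000000;

  // `sum` is accumulated and printed so the loops are not optimized away.
  var sum = 0.0;
  var sw = Stopwatch()..start();
  for (var i = 0; i < n; ++i) {
    sum += math.exp(-((i % 10) * 0.1));
  }
  print('dart:math.exp : ${sw.elapsedMilliseconds} ms (sum: $sum)');

  sum = 0.0;
  sw
    ..reset()
    ..start();
  for (var i = 0; i < n; ++i) {
    sum += fast_math.exp(-((i % 10) * 0.1));
  }
  print('fast_math.exp : ${sw.elapsedMilliseconds} ms (sum: $sum)');
}
```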

# eNeural.net

You can find more at: [eneural.net][eNeural.net]
41 changes: 20 additions & 21 deletions example/eneural_net_benchmark_activation_functions.dart
@@ -2,34 +2,29 @@ import 'dart:typed_data';

import 'package:eneural_net/eneural_net.dart';

void main() {
void main() async {
var totalOperations = 40000000;

var activationFunctions = <ActivationFunction<double, Float32x4>>[
ActivationFunctionSigmoid(),
ActivationFunctionSigmoidFast(),
ActivationFunctionSigmoidBoundedFast(),
];

var allBenchmarks = <Chronometer?>[];

for (var i = 0; i < 10; ++i) {
for (var i = 0; i < 3; ++i) {
print('\n==========================================================\n');

var benchmarks = <Chronometer>[];

benchmark(benchmarks, 3,
() => runAnnFloat32x4(totalOperations, ActivationFunctionSigmoid()));

print('------------------\n');

benchmark(
benchmarks,
3,
() =>
runAnnFloat32x4(totalOperations, ActivationFunctionSigmoidFast()));

print('------------------\n');
for (var af in activationFunctions) {
var bestResult =
benchmark(benchmarks, 3, () => runAnnFloat32x4(totalOperations, af));

benchmark(
benchmarks,
3,
() => runAnnFloat32x4(
totalOperations, ActivationFunctionSigmoidBoundedFast()));
print('bestResult: $bestResult > $af');
print('------------------\n');
}

print('----------------------------------------------------------\n');

@@ -48,7 +43,7 @@ void main() {
}
}

void benchmark(List<Chronometer> allBenchmarks, int sessions,
Chronometer benchmark(List<Chronometer> allBenchmarks, int sessions,
Chronometer Function() runner) {
var results = <Chronometer>[];

@@ -58,7 +53,11 @@ void benchmark(List<Chronometer> allBenchmarks, int sessions,
}

results.sort();
allBenchmarks.add(results.last);

var bestResult = results.last;
allBenchmarks.add(bestResult);

return bestResult;
}

var in1 = Float32x4(0.0, 0.25, 0.50, 1.0);
22 changes: 20 additions & 2 deletions lib/src/eneural_net_activation_functions.dart
@@ -1,7 +1,7 @@
import 'dart:math';
import 'dart:typed_data';

import 'eneural_net_fastmath.dart' as fast_math;
//import 'eneural_net_fastmath.dart' as fast_math;

/// Scope of the activation function.
enum ActivationFunctionScope {
@@ -167,15 +167,33 @@ class ActivationFunctionSigmoid extends ActivationFunctionFloat32x4 {

@override
double activate(double x) {
return 1 / (1 + fast_math.exp(-x));
return 1 / (1 + exp(-x));
//return 1 / (1 + fast_math.exp(-x));
}

@override
Float32x4 activateEntry(Float32x4 entry) {
// New Dart v2.13.1 implementation of `exp` is very fast:
var exp32x4 = Float32x4(
exp(-entry.x),
exp(-entry.y),
exp(-entry.z),
exp(-entry.w),
);

return ActivationFunctionFloat32x4.entryOfOnes /
(ActivationFunctionFloat32x4.entryOfOnes + exp32x4);

/*
// SIMD version with `fast_math.expFloat32x4`
return ActivationFunctionFloat32x4.entryOfOnes /
(ActivationFunctionFloat32x4.entryOfOnes +
fast_math.expFloat32x4(-entry));
*/

/*
// Non-SIMD version:
return Float32x4(
activate(entry.x),
activate(entry.y),
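
For reference, the lane-wise `exp` computation in the new `activateEntry` can be sanity-checked against the scalar formula using only `dart:math` and `dart:typed_data` (a stand-alone sketch, not part of the commit; values agree only up to single-precision error):

```dart
import 'dart:math';
import 'dart:typed_data';

double sigmoid(double x) => 1 / (1 + exp(-x));

void main() {
  var entry = Float32x4(0.0, 0.25, 0.50, 1.0);

  // Same shape as the new `activateEntry`: `exp` applied per lane...
  var exp32x4 = Float32x4(exp(-entry.x), exp(-entry.y), exp(-entry.z), exp(-entry.w));

  // ...then 1 / (1 + e^-x) computed on all 4 lanes at once.
  var ones = Float32x4.splat(1.0);
  var simd = ones / (ones + exp32x4);

  print('SIMD   : ${simd.x} ${simd.y} ${simd.z} ${simd.w}');
  print('Scalar : ${sigmoid(0.0)} ${sigmoid(0.25)} ${sigmoid(0.50)} ${sigmoid(1.0)}');
}
```
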
2 changes: 1 addition & 1 deletion pubspec.yaml
@@ -1,6 +1,6 @@
name: eneural_net
description: AI Library to create efficient Artificial Neural Networks. Computation uses SIMD (Single Instruction Multiple Data) to improve performance.
version: 1.1.1
version: 1.1.2
homepage: https://eneural.net/

environment:
