DiffSharp: Automatic Differentiation Library

DiffSharp is an automatic differentiation (AD) library.

AD allows exact and efficient calculation of derivatives by systematically invoking the chain rule of calculus at the elementary operator level during program execution. AD differs from numerical differentiation, which is prone to truncation and round-off errors, and from symbolic differentiation, which suffers from expression swell and cannot fully handle algorithmic control flow.
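
To make the contrast concrete, the following is a minimal illustrative sketch of forward-mode AD using dual numbers; the Dual type and diffDual function are illustrative only and are not part of DiffSharp's API:

// Minimal sketch of forward-mode AD with dual numbers, for illustration only.
// A dual number carries a primal value together with the derivative of that
// value with respect to the input.
type Dual =
    { P : float   // primal: the value itself
      T : float } // tangent: its derivative with respect to the input
    static member (+) (a : Dual, b : Dual) = { P = a.P + b.P; T = a.T + b.T }
    static member ( * ) (a : Dual, b : Dual) = // product rule
        { P = a.P * b.P; T = a.T * b.P + a.P * b.T }

// Differentiate f at x by seeding the input's tangent with 1
let diffDual (f : Dual -> Dual) (x : float) = (f { P = x; T = 1. }).T

// d/dx (x * x + x) at x = 3 gives 2 * 3 + 1 = 7, exactly
let d = diffDual (fun x -> x * x + x) 3.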

Using the DiffSharp library, derivative calculations (gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products) can be incorporated with minimal change into existing algorithms. Operations can be nested to any level, meaning that you can compute exact higher-order derivatives and differentiate functions that internally make use of differentiation. Please see the API Overview page for a list of available operations.
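
For instance, nesting means the diff operator (shown in the Quick Usage Example below) can be applied to a function whose own body calls diff; a brief sketch:

open DiffSharp.AD

// A function that internally uses differentiation: at each x it evaluates
// the derivative of fun y -> x * sin y at the point y = x
let h (x:D) = diff (fun y -> x * sin y) x

// Nesting lets us differentiate h in turn
let dh = diff h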

The library is under active development by Atılım Güneş Baydin and Barak A. Pearlmutter mainly for research applications in machine learning, as part of their work at the Brain and Computation Lab, Hamilton Institute, National University of Ireland Maynooth.

DiffSharp is implemented in F# and can be used from C# and other languages running on Mono or the .NET Framework. It is tested on Linux, Windows, and Mac OS X. We are working on interfaces/ports to other languages.

As of version 0.6, DiffSharp supports nesting of AD operations. This entails important changes in the library structure. Please see the release notes to learn about the changes and how you can update your code.

Roadmap

At this point we are refining the algorithmic complexity of the core operations and the API, while much better overhead factors, parallelization, and GPU support are planned for a later release. We hope the community will help us get the API right and ensure that the latest models can use DiffSharp as succinctly and as cleanly as possible, which would make the system convenient to use in production.

We are working on the following features:

  • Native linear algebra providers (Intel MKL and CUDA) for faster matrix-vector operations
  • Improved Hessian calculations exploiting structure (sparsity)
  • AD via code transformation, using code quotations

How to Get

You can install the library via NuGet, or download the source code or the binaries of the latest release on GitHub. To install the NuGet package, run the following command in the Package Manager Console:
PM> Install-Package DiffSharp
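
If you work in an F# script rather than a compiled project, you can also reference the installed assembly directly; the path and version number below are illustrative and should be adjusted to your setup:

// Illustrative path: adjust to your packages folder and the release you installed
#r "../packages/DiffSharp.0.6.0/lib/DiffSharp.dll"
open DiffSharp.AD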

Quick Usage Example

// Use mixed mode nested AD
open DiffSharp.AD

// A scalar-to-scalar function
let f x = sin (sqrt x)

// Derivative of f
let df = diff f

// A vector-to-scalar function
let g (x:_[]) = exp (x.[0] * x.[1]) + x.[2]

// Gradient of g
let gg = grad g 

// Hessian of g
let hg = hessian g
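
The resulting functions can then be evaluated at concrete points; a brief sketch, where the D constructor lifts a float into the library's AD scalar type:

// Evaluate the derivative of f at x = 2
let v1 = df (D 2.)

// Evaluate the gradient and the Hessian of g at the point (1, 2, 3)
let v2 = gg [| D 1.; D 2.; D 3. |]
let v3 = hg [| D 1.; D 2.; D 3. |]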

More Info and How to Cite

If you are using the library and would like to cite it, please use the following information:

Atılım Güneş Baydin, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind (2015) Automatic differentiation and machine learning: a survey. arXiv preprint arXiv:1502.05767.

You can also see our recent poster for the MLOSS Workshop at the International Conference on Machine Learning 2015. For in-depth material, see our publications page and the autodiff.org website.

If you are using DiffSharp, we would be very happy to hear about it and put a link to your work on this page.
