DiffSharp


Nested AD

The main functionality of DiffSharp is found under the DiffSharp.AD namespace. Opening this namespace allows you to automatically evaluate derivatives of functions via forward and/or reverse AD. Internally, for any case involving a function \(f: \mathbb{R}^n \to \mathbb{R}^m\), DiffSharp uses forward AD when \(n \ll m\) and reverse AD when \(n \gg m\). Combinations such as reverse-on-forward or forward-on-reverse AD can also be handled.

For a complete list of the available differentiation operations, please refer to API Overview and API Reference.

Background

The library supports nested invocations of differentiation operations. So, for example, you can compute exact higher-order derivatives or take derivatives of functions that are themselves internally computing derivatives.

open DiffSharp.AD.Float64

let y x = sin (sqrt x)

// Derivative of y
let d1 = diff y

// 2nd derivative of y
let d2 = diff (diff y)

// 3rd derivative of y
let d3 = diff d2
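
As a quick sanity check (a sketch not part of the original example), the AD result can be compared against the closed-form derivative \(y'(x) = \frac{\cos \sqrt{x}}{2 \sqrt{x}}\):

// Derivative of y at 2, computed via AD
let adDeriv = d1 (D 2.)

// The same derivative from the closed form; both evaluate to the same D value
let exact = cos (sqrt (D 2.)) / (2. * sqrt (D 2.))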

Nesting capability means more than just being able to compute higher-order derivatives of one function.

DiffSharp can handle complex nested cases, such as computing the derivative of a function \(f\) that takes an argument \(x\) and internally computes the derivative of another function \(g\) nested inside \(f\), where \(g\) has a free reference to \(x\), the argument of the surrounding function.

\[ \frac{d}{dx} \left. \left( x \left( \left. \frac{d}{dy} x y \; \right|_{y=3} \right) \right) \right|_{x=2}\]

let d4 = diff (fun x -> x * (diff (fun y -> x * y) (D 3.))) (D 2.)
val d4 : D = D 4.0
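
Here the inner derivative is \(\frac{d}{dy} x y = x\), so the outer expression reduces to \(x \cdot x = x^2\), whose derivative at \(x = 2\) is \(4\).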

This allows you to write, for example, nested optimization algorithms of the form

\[ \mathbf{min} \left( \lambda x \; . \; (f \; x) + \mathbf{min} \left( \lambda y \; . \; g \; x \; y \right) \right)\; ,\]

for functions \(f\) and \(g\) and a gradient-based minimization procedure \(\mathbf{min}\).
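
As an illustration, here is a minimal sketch of such a nested minimization. The helper minimize, its fixed step size, and its iteration count are hypothetical choices for this sketch, not part of DiffSharp's API:

open DiffSharp.AD.Float64

// Hypothetical helper: fixed-step gradient descent returning the minimum value
let minimize (f:DV->D) (x0:DV) : D =
    let mutable x = x0
    for _ in 1 .. 100 do
        x <- x - 0.01 * grad f x
    f x

// min over x of f(x) + min over y of g(x, y); the inner minimize runs
// reverse AD inside the objective of the outer one, and tagging keeps
// the two derivative computations from interfering
let nestedMin (f:DV->D) (g:DV->DV->D) (x0:DV) (y0:DV) : D =
    minimize (fun x -> f x + minimize (g x) y0) x0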

Correctly nesting AD in a functional framework is achieved through the method of "tagging", which prevents a class of bugs called "perturbation confusion", in which a system fails to distinguish between distinct perturbations introduced by distinct invocations of differentiation operations. You can refer to the following articles, among others, to understand the issue and how it is handled correctly:

Jeffrey Mark Siskind and Barak A. Pearlmutter. Perturbation Confusion and Referential Transparency: Correct Functional Implementation of Forward-Mode AD. In Proceedings of the 17th International Workshop on Implementation and Application of Functional Languages (IFL2005), Dublin, Ireland, Sep. 19-21, 2005.

Jeffrey Mark Siskind and Barak A. Pearlmutter. Nesting forward-mode AD in a functional framework. Higher Order and Symbolic Computation 21(4):361-76, 2008. doi:10.1007/s10990-008-9037-1

Barak A. Pearlmutter and Jeffrey Mark Siskind. Reverse-Mode AD in a functional framework: Lambda the ultimate backpropagator. TOPLAS 30(2):1-36, Mar. 2008. doi:10.1145/1330017.1330018

Forward and Reverse AD Operations

DiffSharp automatically selects forward or reverse AD, or a combination of these, for a given operation.

The following is just a small selection of the available operations.

open DiffSharp.AD.Float64

// f: D -> D
let f (x:D) = sin (3. * sqrt x)

// Derivative of f at 2
// Uses forward AD
let df = diff f (D 2.)

// g: DV -> D
let g (x:DV) = sin (x.[0] * x.[1])

// Directional derivative of g at (2, 3) with direction (4, 1)
// Uses forward AD
let ddg = gradv g (toDV [2.; 3.]) (toDV [4.; 1.])

// Gradient of g at (2, 3)
// Uses reverse AD
let gg = grad g (toDV [2.; 3.])

// Hessian-vector product of g at (2, 3) with vector (4, 1)
// Uses reverse-on-forward AD
let hvg = hessianv g (toDV [2.; 3.]) (toDV [4.; 1.])

// Hessian of g at (2, 3)
// Uses reverse-on-forward AD
let hg = hessian g (toDV [2.; 3.])

// h: DV -> DV
let h (x:DV) = toDV [sin x.[0]; cos x.[1]]

// Jacobian-vector product of h at (2, 3) with vector (4, 1)
// Uses forward AD
let jvh = jacobianv h (toDV [2.; 3.]) (toDV [4.; 1.])

// Transposed Jacobian-vector product of h at (2, 3) with vector (4, 1)
// Uses reverse AD
let tjvh = jacobianTv h (toDV [2.; 3.]) (toDV [4.; 1.])

// Jacobian of h at (2, 3)
// Uses forward or reverse AD depending on the number of inputs and outputs
let jh = jacobian h (toDV [2.; 3.])

Using the Reverse AD Trace

In addition to the high-level differentiation API that uses reverse AD (such as grad and jacobianTv), you can make use of the exposed low-level trace functionality. Reverse AD automatically builds a global trace (or "tape", in AD literature) of all executed numeric operations, which enables a subsequent reverse sweep through these operations to propagate adjoint values from the output back to the inputs.

The technique is equivalent to the backpropagation method commonly used for training artificial neural networks, which is essentially just a special case of reverse AD. (You can see an implementation of the backpropagation algorithm using reverse AD in the neural networks example.)
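
As an illustration, a single training step for a one-neuron model can be written directly with grad; the reverse sweep it triggers is exactly a backpropagation pass. The weights, input, target, and learning rate below are made-up values for this sketch:

open DiffSharp.AD.Float64

let w0 = toDV [0.1; -0.2]  // initial weights (made-up values)
let xin = toDV [0.5; 1.0]  // a single training input
let t0 = D 0.8             // its target output

// Squared-error loss of a tanh neuron, as a function of the weights
// (DV * DV is the inner product, giving a D)
let loss (w:DV) =
    let out = tanh (w * xin)
    (out - t0) * (out - t0)

// One gradient-descent step; grad performs the backpropagation sweep
let w1 = w0 - 0.1 * grad loss w0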

For example, consider the computation

\[ e = (\sin a) (a + b) \; ,\]

using the values \(a = 0.5\) and \(b = 1.2\).

During the execution of a program, this computation is carried out by the sequence of operations

\[ \begin{eqnarray*} a &=& 0.5 \; ,\\ b &=& 1.2 \; , \\ c &=& \sin a \; , \\ d &=& a + b \; , \\ e &=& c \times d \; , \end{eqnarray*}\]

the dependencies between which can be represented by the computational graph below.

[Figure: computational graph of the operations above]

Reverse-mode AD works by propagating adjoint values from the output (e.g. \(\bar{e} = \frac{\partial e}{\partial e} = 1\)) towards the inputs (e.g. \(\bar{a} = \frac{\partial e}{\partial a}\) and \(\bar{b} = \frac{\partial e}{\partial b}\)), using adjoint propagation rules dictated by the dependencies in the computational graph:

\[ \begin{eqnarray*} \bar{d} &=& \frac{\partial e}{\partial d} &=& \frac{\partial e}{\partial e} \frac{\partial e}{\partial d} &=& \bar{e} c\; , \\ \bar{c} &=& \frac{\partial e}{\partial c} &=& \frac{\partial e}{\partial e} \frac{\partial e}{\partial c} &=& \bar{e} d\; , \\ \bar{b} &=& \frac{\partial e}{\partial b} &=& \frac{\partial e}{\partial d} \frac{\partial d}{\partial b} &=& \bar{d} \; , \\ \bar{a} &=& \frac{\partial e}{\partial a} &=& \frac{\partial e}{\partial c} \frac{\partial c}{\partial a} + \frac{\partial e}{\partial d} \frac{\partial d}{\partial a} &=& \bar{c} (\cos a) + \bar{d} \; .\\ \end{eqnarray*}\]
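
Substituting the example values (so that \(c = \sin 0.5 \approx 0.4794\) and \(d = 0.5 + 1.2 = 1.7\)) and seeding \(\bar{e} = 1\) gives

\[ \begin{eqnarray*} \bar{d} &=& c \approx 0.4794 \; , \\ \bar{c} &=& d = 1.7 \; , \\ \bar{b} &=& \bar{d} \approx 0.4794 \; , \\ \bar{a} &=& 1.7 \, (\cos 0.5) + 0.4794 \approx 1.9713 \; , \end{eqnarray*}\]

matching the adjoint values computed in the code below.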

To write code using the low-level AD functionality, you should understand the tagging method used for avoiding perturbation confusion. In practice, you would rarely need to write such code: the normal way of interacting with the library is through the high-level differentiation API, which handles these issues internally.

You can get access to adjoints as follows.

open DiffSharp.AD.Float64

// Get a fresh global tag for this run of reverse AD
let i = DiffSharp.Util.GlobalTagger.Next

// Initialize input values for reverse AD
let a = D 0.5 |> makeReverse i
let b = D 1.2 |> makeReverse i

// Perform a series of operations involving the D type
let e = (sin a) * (a + b)

// Propagate the adjoint value of 1 backward from e (or de/de = 1)
// i.e., calculate partial derivatives of e with respect to other variables
e |> reverseProp (D 1.)

// Read the adjoint values of the inputs
// You can calculate all partial derivatives in just one reverse sweep!
let deda = a.A
let dedb = b.A
val a : D = DR (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3657u)
val b : D =
  DR (D 1.2,{contents = D 0.4794255386;},Noop,{contents = 0u;},3657u)
val e : D =
  DR
    (D 0.8150234156,{contents = D 1.0;},
     Mul_D_D
       (DR
          (D 0.4794255386,{contents = D 1.7;},
           Sin_D
             (DR
                (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3657u)),
           {contents = 0u;},3657u),
        DR
          (D 1.7,{contents = D 0.4794255386;},
           Add_D_D
             (DR
                (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3657u),
              DR
                (D 1.2,{contents = D 0.4794255386;},Noop,{contents = 0u;},
                 3657u)),{contents = 0u;},3657u)),{contents = 0u;},3657u)
val deda : D = D 1.971315894
val dedb : D = D 0.4794255386

In addition to the partial derivatives of the dependent variable \(e\) with respect to the independent variables \(a\) and \(b\), you can also extract the partial derivatives of \(e\) with respect to any intermediate variable involved in this computation.

// Get a fresh global tag for this run of reverse AD
let i' = DiffSharp.Util.GlobalTagger.Next

// Initialize input values for reverse AD
let a' = D 0.5 |> makeReverse i'
let b' = D 1.2 |> makeReverse i'

// Perform a series of operations involving the D type
let c' = sin a'
let d' = a' + b'
let e' = c' * d' // e' = (sin a') * (a' + b')

// Propagate the adjoint value of 1 backward from e
e' |> reverseProp (D 1.)

// Read the adjoint values
// You can calculate all partial derivatives in just one reverse sweep!
let de'da' = a'.A
let de'db' = b'.A
let de'dc' = c'.A
let de'dd' = d'.A
val a' : D =
  DR (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3659u)
val b' : D =
  DR (D 1.2,{contents = D 0.4794255386;},Noop,{contents = 0u;},3659u)
val c' : D =
  DR
    (D 0.4794255386,{contents = D 1.7;},
     Sin_D
       (DR (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3659u)),
     {contents = 0u;},3659u)
val d' : D =
  DR
    (D 1.7,{contents = D 0.4794255386;},
     Add_D_D
       (DR (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3659u),
        DR (D 1.2,{contents = D 0.4794255386;},Noop,{contents = 0u;},3659u)),
     {contents = 0u;},3659u)
val e' : D =
  DR
    (D 0.8150234156,{contents = D 1.0;},
     Mul_D_D
       (DR
          (D 0.4794255386,{contents = D 1.7;},
           Sin_D
             (DR
                (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3659u)),
           {contents = 0u;},3659u),
        DR
          (D 1.7,{contents = D 0.4794255386;},
           Add_D_D
             (DR
                (D 0.5,{contents = D 1.971315894;},Noop,{contents = 0u;},3659u),
              DR
                (D 1.2,{contents = D 0.4794255386;},Noop,{contents = 0u;},
                 3659u)),{contents = 0u;},3659u)),{contents = 0u;},3659u)
val de'da' : D = D 1.971315894
val de'db' : D = D 0.4794255386
val de'dc' : D = D 1.7
val de'dd' : D = D 0.4794255386