Hot questions for Using Neural networks in dlib

Question:

It seems that dlib requires a loss layer that dictates how the layer most distant from the input layer is treated. I cannot find any documentation on the loss layers, but it seems there is no way to have a plain summation layer.

Summing the values of the last layer is exactly what I need for regression, though (see also: https://deeplearning4j.org/linear-regression).

I was thinking along the lines of writing a custom loss layer but could not find information about this, either.

So, have I overlooked a corresponding layer here, or is there another way to get what I need?


Answer:

The loss layers in dlib are listed in the menu on dlib's machine learning page. Look for the words "loss layers". There is plenty of documentation there.

The current released version of dlib doesn't include a regression loss. However, if you get the current code from github you can use the new loss_mean_squared layer to do regression. See: https://github.com/davisking/dlib/blob/master/dlib/dnn/loss_abstract.h
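As a hedged sketch of how such a regression net could be wired up (the exact layer sizes and training parameters here are illustrative assumptions, not taken from the question):

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// A single fully connected output neuron; loss_mean_squared treats its
// one output as the predicted real value for regression.
using net_type = loss_mean_squared<
                     fc<1,
                     relu<fc<10,
                     input<matrix<float,0,1>>>>>>;

int main()
{
    std::vector<matrix<float,0,1>> samples;  // input vectors
    std::vector<float> targets;              // real-valued labels
    // ... fill samples and targets with your data ...

    net_type net;
    dnn_trainer<net_type> trainer(net);
    trainer.set_learning_rate(0.01);
    trainer.train(samples, targets);
}
```

This requires building against the current dlib master from github, since loss_mean_squared is not in the released version.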

Question:

In dlib you can construct a neural network as shown in this example. The structure is defined with a using alias, and an instance of it is then created like this:

using net_type = [...] ;
net_type net;

After that point, how can a hidden layer be added, removed, or resized at runtime? There are things like add_layer, but neural nets in dlib are heavily templated, and I don't know if or how add_layer, or perhaps layer for accessing layers, might help.

To be more specific, given the linked example, how would I change, say, the

relu<fc<84,

part at runtime to, say, relu<fc<100,, remove it, or add another layer between any two given layers?


Answer:

You can't add layers at runtime. You can, however, edit their properties; for example, the fc layer has a set_num_outputs() method. Each layer object has documentation you can look at to see what you can do with it. The docs for fc are here: http://dlib.net/dlib/dnn/layers_abstract.h.html#fc_

This example shows how to access layers, e.g. layer<7>(net).layer_details().set_num_outputs(123).
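A hedged sketch of how that could look for a net in the spirit of the question's example (the layer index counts from the loss end and depends on the exact network definition, so the index 2 below is an assumption for this particular net):

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// A small net similar to the question's example. Counting from the loss
// end, fc<10> is layer 0, relu is layer 1, and fc<84> is layer 2 here.
using net_type = loss_multiclass_log<
                     fc<10,
                     relu<fc<84,
                     input<matrix<float>>>>>>;

int main()
{
    net_type net;
    // Resize the hidden fc layer from 84 to 100 outputs at runtime.
    layer<2>(net).layer_details().set_num_outputs(100);
}
```

Note that this changes the number of outputs of an existing layer; the network's topology itself is fixed at compile time by the using alias.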

Question:

I'm using dlib, following the neural network example: dlib neural network

I get the error:

error: no matching function for call to ‘dlib::mlp_kernel_c::mlp_kernel_c()’

I am trying to initialize the

"mlp::kernel_1a_c net(2,5);"

variable inside a class, but I get all kinds of errors. Here is the piece of code relevant to my issue:

#ifndef MYCLASS_H
#define MYCLASS_H
#include <dlib/mlp.h>

typedef dlib::mlp::kernel_1a_c mlp_trainer_type;

class MyClass
{
 public:
  MyClass()
  {
     // After declaration, how do I initialize mouth_neural_network_ here?
     // mouth_neural_network_(5, 5);  ????
  }
 private:
  /* This part fails, even without using the typedef. */
  mlp_trainer_type mouth_neural_network_;
};
#endif

Won't work even if I do:

private:
  mlp_trainer_type mouth_neural_network_(2,5);

Or if I declare and define it as a public variable. How do I solve this problem? I don't want a global variable.

I also use the SVM library from dlib, and that one works inside my class. It's just MLP that does not work.

The program worked fine using just dlib's SVM until I added that private neural network member today.


Answer:

The class you are trying to use, mlp_kernel_c, does not have a default constructor. You need to initialize the member variable in the constructor's initializer list, before the constructor body runs:

MyClass() : mouth_neural_network_(2,5)
{
     // Other stuff
}