Hot questions on using neural networks in OpenVINO

Question:

I have been looking for a way to get the tensors of weights/parameters and biases for each layer of the network using the C++ API of the OpenVINO framework. I can't find anything in the documentation, nor any example in the samples. How could I extract these tensors?

Thanks, César.

EDIT: Code for getting weights and biases separately:

for (auto&& layer : this->pImplementation->network) {
    weightsbuf << "Layer name: " << layer->name << std::endl;
    weightsbuf << "Parameters:" << std::endl;

    for (auto&& param : layer->params) {
        weightsbuf << '\t' << param.first << ": " << param.second << std::endl;
    }

    // The "kernel" parameter is a comma-separated size string, e.g. "3,3".
    std::vector<int> kernelvect;
    auto kernelsize = layer->params.at("kernel");
    std::stringstream ss(kernelsize);

    // Split the kernel size string by commas.
    for (int i; ss >> i;) {
        kernelvect.push_back(i);
        if (ss.peek() == ',')
            ss.ignore();
    }

    // Note: this count ignores input channels; for a general convolution
    // the weight count is kernel_h * kernel_w * inputs * outputs.
    int noutputs = std::stoi(layer->params.at("output"));
    size_t nweights = kernelvect[0] * kernelvect[1] * noutputs;
    size_t nbias = noutputs;

    // Assumes each blob stores the weights first, followed by the biases.
    for (auto&& blob : layer->blobs) {
        const float* data = blob.second->buffer().as<float*>();
        weightsbuf << '\t' << blob.first << ": ";
        for (size_t w = 0; w < nweights; ++w) {
            weightsbuf << data[w] << " ";
        }
        weightsbuf << std::endl;
        weightsbuf << '\t' << "biases: ";
        for (size_t b = 0; b < nbias; ++b) {
            weightsbuf << data[nweights + b] << " ";
        }
    }
    weightsbuf << std::endl;
}

Answer:

It looks like there is no official example demonstrating that functionality; I haven't found one either.

I implemented a basic sample which prints information about each layer of a network. Please take a look: https://github.com/ArtemSkrebkov/dldt/blob/askrebko/iterate-through-network/inference-engine/samples/cnn_network_parser/main.cpp

I believe the idea of how to use the API is clear from it.

The sample is based on the current state of the dldt repo (branch '2019', which corresponds to release 2019 R3.1).

Another link that might be useful is the documentation for the CNNLayer class: https://docs.openvinotoolkit.org/latest/classInferenceEngine_1_1CNNLayer.html

Question:

I'm using the OpenVINO toolkit in Python for head-pose estimation. I load the network as follows:

weights_headpose = 'head-pose-estimation-adas-0001-2018-FP32.bin'
config_headpose = 'head-pose-estimation-adas-0001-2018-FP32.xml'
model_headpose = cv.dnn.readNet(weights_headpose, config_headpose)

The following

print(model_headpose.getLayerNames())

gives:

['angle_p_fc', 'angle_r_fc', 'angle_y_fc']

When I run:

print(model_headpose.forward('angle_y_fc'))

I get a float, as expected; but when I run

print(model_headpose.forward('angle_p_fc'))

or

print(model_headpose.forward('angle_r_fc'))

I get the following error:

cv2.error: OpenCV(4.1.0-openvino) C:\jenkins\workspace\OpenCV\OpenVINO\build\opencv\modules\dnn\src\op_inf_engine.cpp:688: error: (-215:Assertion failed) !isInitialized() in function 'cv::dnn::InfEngineBackendNet::initPlugin'

Are these layers not initialized? Can someone please help me? Thanks in advance!


Answer:

My question was solved by requesting all three outputs in a single call:

model_headpose.forward(['angle_p_fc', 'angle_r_fc', 'angle_y_fc'])

Passing the list of output names runs one forward pass and returns the blobs for all three output layers.