## Hot questions on using neural networks with multidimensional arrays

Question:

I have an image represented as an array (`img`), and I'd like to make many copies of the image and, in each copy, zero out a different square of the image (in the first copy zero out `[0:2, 0:2]`, in the next copy zero out `[0:2, 3:5]`, etc.). I've used `np.broadcast_to` to create multiple copies of the image, but I'm having trouble indexing through the multiple copies of the image, and through the multiple locations within the images, to zero out the squares.

I think I'm looking for something like skimage.util.view_as_blocks, but I need to be able to write to the original array, not just read.

The idea behind this is to pass all the copies of the image through a neural network. The copy that performs the worst should be the one with the class (picture) I am trying to identify in its zeroed-out location.

```python
img = np.arange(10*10).reshape(10,10)
img_copies = np.broadcast_to(img, [100, 10, 10])
z = np.zeros(2*2).reshape(2,2)
```

Thanks

Answer:

I think I have cracked it! Here's an approach using `masking` along a `6D` reshaped array -

```python
def block_masked_arrays(img, BSZ):
    # Store shape params.
    m = img.shape[0]//BSZ
    n = m**2

    # Make copies of the input array by replicating it along the first axis.
    # Reshape so that the block sizes are exposed by going higher dimensional.
    img3D = np.tile(img, (n,1,1)).reshape(m,m,m,BSZ,m,BSZ)

    # Identity mask: True on the diagonal, so each copy selects exactly one block.
    # Reshape and broadcast it to match the "blocky" reshaped input array.
    mask = np.eye(n, dtype=bool).reshape(m,m,m,1,m,1)

    # Use the mask to zero out the appropriate blocks. Reshape back to 3D.
    img3D[np.broadcast_to(mask, img3D.shape)] = 0
    img3D.shape = (n, m*BSZ, -1)
    return img3D
```

Sample run -

```python
In [339]: img
Out[339]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])

In [340]: block_masked_arrays(img, BSZ=2)
Out[340]:
array([[[ 0,  0,  2,  3],
        [ 0,  0,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]],

       [[ 0,  1,  0,  0],
        [ 4,  5,  0,  0],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]],

       [[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 0,  0, 10, 11],
        [ 0,  0, 14, 15]],

       [[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9,  0,  0],
        [12, 13,  0,  0]]])
```
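For readers who find the 6D reshape hard to follow, here is a simpler (less vectorized) sketch of the same idea using plain slicing - `block_masked_copies` is a hypothetical helper name, and note that the copies are made with `np.tile` rather than `np.broadcast_to`, since the latter returns a read-only view:

```python
import numpy as np

def block_masked_copies(img, bsz):
    # One copy per (row, col) block; block (r, c) is zeroed in copy r*m + c.
    m = img.shape[0] // bsz
    copies = np.tile(img, (m * m, 1, 1))  # writable, unlike np.broadcast_to
    for idx in range(m * m):
        r, c = divmod(idx, m)
        copies[idx, r*bsz:(r+1)*bsz, c*bsz:(c+1)*bsz] = 0
    return copies
```

On the 4x4 sample above, this produces the same four masked copies in the same order.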

Question:

I had to create a Matrix class so I could make use of it in my neural network project. How could I make creating a Matrix object work the way it does with a multidimensional array?

So basically I have a Matrix class which looks like this:

```csharp
class Matrix
{
    private int rows;
    private int size;
    private int columns;
    private double[,] _inMatrix;

    public double this[int row, int col]
    {
        get { return _inMatrix[row, col]; }
        set { _inMatrix[row, col] = value; }
    }

    public Matrix(int row, int col)
    {
        rows = row;
        columns = col;
        size = row * col;
        _inMatrix = new double[rows, columns];
    }

    public Matrix() { }

    // and a bunch of operations
}
```

It works like a charm when I know the rows and columns of the Matrix, but I would love to be able to set the values at the start or later. When I create a Matrix object, I do it this way:

```csharp
Matrix m1 = new Matrix(row, column);
```

What I want to do is to be able to set the values at the start like I would with arrays. I know that in C# this is how we initialize a multidimensional array:

```csharp
double[,] array2D = new double[,] { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };
// or
int[,] array2D;
array2D = new int[,] { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };
```

How could I achieve something similar?

Answer:

Maybe something like this, with an `implicit operator` so you can write `Matrix m = new double[,] { {1,1}, {2,3} };`. Also, you don't need `_rows` and `_columns`, as you can easily extract them from the underlying multidimensional array (`GetLength(int dimension)`).

```csharp
class Matrix
{
    private double[,] _inMatrix;

    public double this[int row, int col]
    {
        get => _inMatrix[row, col];
        set => _inMatrix[row, col] = value;
    }

    public Matrix(double[,] a) => Initialize(a);

    // and a bunch of operations

    private void Initialize(double[,] a) => _inMatrix = a;

    public static implicit operator Matrix(double[,] a) => new Matrix(a);
}
```

Question:

Alright - assume I have two numpy arrays, shapes are:

```
(185, 100, 50, 3)
(64, 100, 50, 3)
```

The values contained are 185 or 64 frames of video (for each frame, the width is 100 pixels, the height is 50, and there are 3 channels; these are just images. The specifics of the images remain constant - the only value that changes is the number of frames per video). I need to get them both into a single array of some shape like

(2, n, 100, 50, 3)

Where both videos are contained (to run through a neural net as a batch)

I've already tried using np.stack - but I get

```
ValueError: all input arrays must have the same shape
```

Answer:

This is a quick brainstorm idea I've got, along with a strategy and Python code. Note: I was going to keep this to just a comment, but to illustrate the idea I needed to type in some code. So here we go! (Grabbing a coffee / a strong drink is recommended...)

##### Current State

- we have video 1 `vid1` with 4D shape `(185, 100, 50, 3)`
- we have video 2 `vid2` with 4D shape `(64, 100, 50, 3)`
- ... where the shape represents `(frame ID, width, height, RGB channels)`
- we want to "stack" the two videos together as one numpy array with 5D shape `(2, n, 100, 50, 3)`. Note: `2` because we are stacking 2 videos. `n` is a hyperparameter that we can choose. We keep the video size the same (`100 width x 50 height x 3 RGB channels`)

##### Opportunities

The first thing I see is that `vid1` has roughly 3 times more frames than `vid2`. What about using `60` as the common factor? i.e. let's set our hyperparameter `n` to `60`. (Note: some "frame cropping" / "frame throwing away" may be required - this is covered below.)

##### Strategy

##### Phase 1 - Crop both videos (throw away some frames)

Let's crop both `vid1` and `vid2` to nice round numbers that are multiples of `60` (our `n` - the hyperparameter). Concretely:

- crop `vid1` so that the shape becomes `(180, 100, 50, 3)` (i.e. we throw away the last 5 frames). We call this new cropped video `vid1_cropped`.
- crop `vid2` so that the shape becomes `(60, 100, 50, 3)` (i.e. we throw away the last 4 frames). We call this new cropped video `vid2_cropped`.

##### Phase 2 - Make both videos 60 frames

`vid2_cropped` is already at 60 frames, with shape `(60, 100, 50, 3)`, so we leave it alone. `vid1_cropped`, however, is at 180 frames, so I suggest we reduce it to 60 frames by averaging the RGB channel values in 3-frame batches - for all pixel positions (along width and height). What we get at the end of this process is a somewhat "diluted" (averaged) video with the same shape as `vid2_cropped` - `(60, 100, 50, 3)`. Let's call this diluted video `vid1_cropped_diluted`.
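To sanity-check the 3-frame averaging on a tiny stand-in (an illustrative sketch, not part of the original strategy - 6 frames of 2x2 single-channel "video"):

```python
import numpy as np

# Tiny stand-in video: 6 frames, 2x2 pixels, 1 channel.
vid = np.arange(6 * 2 * 2 * 1, dtype=float).reshape(6, 2, 2, 1)

# Group consecutive frames in threes, then average within each group:
# reshape to (2, 3, 2, 2, 1) and take the mean over axis=1.
diluted = vid.reshape(2, 3, 2, 2, 1).mean(axis=1)

print(diluted.shape)        # (2, 2, 2, 1)
print(diluted[0, 0, 0, 0])  # frames 0, 1, 2 at pixel (0, 0): (0 + 4 + 8) / 3 = 4.0
```

The grouping axis matters: the frame axis must be split as `(frames_out, 3)` with the mean taken over the second of those axes, so that each output frame averages three consecutive input frames.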

##### Phase 3 - stack the two same-shape videos together

Now that both `vid2_cropped` and `vid1_cropped_diluted` are of the same 4D shape `(60, 100, 50, 3)`, we may stack them together to obtain our final numpy array of 5D shape `(2, 60, 100, 50, 3)` - let's call this `vids_combined`.

We are done!

##### Demo

Turning the strategy into code. I did this in Python 3.6 (with Jupyter Notebook / Jupyter Console).

Some notes:

I have yet to fully validate the code (and will revise as needed). In the meantime, if you see any bugs please shout - I will be happy to update.

One subtlety in the "diluting" step (the `np.average` call in `In [10]` below): to average consecutive 3-frame batches for all pixel positions, the array must be reshaped to `(60, 3, 100, 50, 3)` and averaged over `axis=1`. Reshaping to `(3, 60, ...)` and averaging over `axis=0` would instead average frames that are 60 apart.

This post illustrates the concepts and some code implementation. Ideally I would have stepped through this in more depth, via much smaller video sizes, so we could build better intuition and visualise each step pixel by pixel. (I might come back to this when I have time.) For now, I believe the numpy array shape analysis is sufficient to convey the idea.

```python
In [1]: import numpy as np

In [2]: vid1 = np.random.random((185, 100, 50, 3))

In [3]: vid1.shape
Out[3]: (185, 100, 50, 3)

In [4]: vid2 = np.random.random((64, 100, 50, 3))

In [5]: vid2.shape
Out[5]: (64, 100, 50, 3)

In [6]: vid1_cropped = vid1[:180]

In [7]: vid1_cropped.shape
Out[7]: (180, 100, 50, 3)

In [8]: vid2_cropped = vid2[:60]

In [9]: vid2_cropped.shape
Out[9]: (60, 100, 50, 3)

In [10]: vid1_cropped_diluted = np.average(vid1_cropped.reshape(60, 3, 100, 50, 3), axis=1)

In [11]: vid1_cropped_diluted.shape
Out[11]: (60, 100, 50, 3)

In [12]: vids_combined = np.stack([vid1_cropped_diluted, vid2_cropped])

In [13]: vids_combined.shape
Out[13]: (2, 60, 100, 50, 3)
```
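If the averaging feels too lossy, another option worth considering (an untested sketch, not part of the strategy above) is to subsample every 3rd frame instead of averaging triples; the shapes work out the same:

```python
import numpy as np

vid1 = np.random.random((185, 100, 50, 3))
vid2 = np.random.random((64, 100, 50, 3))

# Keep every 3rd of the first 180 frames instead of averaging 3-frame batches.
vid1_sampled = vid1[:180:3]    # shape (60, 100, 50, 3)
vid2_cropped = vid2[:60]       # shape (60, 100, 50, 3)

vids_combined = np.stack([vid1_sampled, vid2_cropped])
print(vids_combined.shape)     # (2, 60, 100, 50, 3)
```

Subsampling keeps sharp individual frames but discards two-thirds of them, whereas averaging blends motion across each 3-frame window - which is preferable depends on the downstream network.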

Question:

I'm trying to make a jagged array for a neural network and this is giving me an out of bounds error...

```java
int[] sizes = { layer1, layer2, layer3 };
int k = sizes.length - 1;
double[][][] net = new double[k][][];
int i;
for (i = 0; i < k; i++)
    net[i] = new double[sizes[i]][];
for (int j = 0; j < sizes[i]; j++)
    net[i][j] = new double[sizes[i + 1]];
```

The size of y in `net[x][ ][y]` should be equal to the size of `net[x+1][y][ ]`.

I did it on paper and I thought that this would work.

Answer:

```java
int[] sizes = { layer1, layer2, layer3 };
int k = sizes.length - 1;
```

So, `k` is equal to 2.

```java
int i;
for (i = 0; i < k; i++)
    net[i] = new double[sizes[i]][];
```

After that loop, `i` is equal to 2.

```
for (int j = 0; j < sizes[i]; j++)
    net[i][j] = new double[sizes[i + 1]];
                           ^^^^^^^^^^^^
                           ArrayIndexOutOfBoundsException
```

Boom - `sizes[i + 1]` throws `ArrayIndexOutOfBoundsException`, since `sizes` only has indices 0, 1 and 2 and you are referring to `sizes[3]`. The fix is to add braces to the outer `i` loop so that the `j` loop runs inside it, for each `i < k`, rather than after it.

Question:

**EDIT:** Found a solution! As the commenters suggested, using `memset` is an insanely better approach. Replace the entire for loop with

`memset(lookup->n, -3, (dimensions*sizeof(signed char)));`

where

`long int dimensions = box1 * box2 * box3 * box4 * box5 * box6 * box7 * box8 * memvara * memvarb * memvarc * memvard * adirect * tdirect * fs * bs * outputnum;`

##### Intro

Right now, I'm looking at a beast of a for-loop:

```c
for (j = 0; j < box1; j++) {
  for (k = 0; k < box2; k++) {
    for (l = 0; l < box3; l++) {
      for (m = 0; m < box4; m++) {
        for (x = 0; x < box5; x++) {
          for (y = 0; y < box6; y++) {
            for (xa = 0; xa < box7; xa++) {
              for (xb = 0; xb < box8; xb++) {
                for (nb = 0; nb < memvara; nb++) {
                  for (na = 0; na < memvarb; na++) {
                    for (nx = 0; nx < memvarc; nx++) {
                      for (nx1 = 0; nx1 < memvard; nx1++) {
                        for (naa = 0; naa < adirect; naa++) {
                          for (nbb = 0; nbb < tdirect; nbb++) {
                            for (ncc = 0; ncc < fs; ncc++) {
                              for (ndd = 0; ndd < bs; ndd++) {
                                for (o = 0; o < outputnum; o++) {
                                  // set to default value
                                  lookup->n[j][k][l][m][x][y][xa][xb][nb][na][nx][nx1][naa][nbb][ncc][ndd][o] = -3;
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

##### The Problem

This loop is called every cycle in the main run to reset values to an initial state. Unfortunately, it is necessary for the structure of the program that this many values are kept in a single data structure.

Here's the kicker: for every 60 seconds of program run time, **57 seconds goes to this function alone**.

##### The Question

My question is this: would hash tables be an appropriate substitute for a linear array? This array has O(n^17) elements, whereas hash tables promise O(1) lookup.

- If so, what hash library would you recommend? This program is in C and has no native hash support.
- If not, what would you recommend instead?
- Can you provide some pseudo-code on how you think this should be implemented?

##### Notes

- OpenMP was used in an attempt to parallelize this loop. Numerous implementations only resulted in slightly-to-greatly increased run time.
- Memory usage is not particularly an issue -- this program is intended to be run on an insanely high-spec'd computer.
- We are student researchers, thrust into a heretofore unknown world of optimization and parallelization -- please bear with us, and thank you for any help.

Answer:

##### Hash vs Array

As comments have specified, an array should not be a problem here. Lookup into an array with a known offset is **O(1)**.

##### The Bottleneck

It seems to me that the bulk of the work here (and the reason it is slow) is the number of pointer de-references in the inner-loop.

To explain in a bit more detail, consider `myData[x][y][z]` in the following code:

```c
for (int x = 0; x < someVal1; x++) {
    for (int y = 0; y < someVal2; y++) {
        for (int z = 0; z < someVal3; z++) {
            myData[x][y][z] = -3; // x and y only change in outer-loops.
        }
    }
}
```

To compute the location for the `-3`, we do a lookup and add a value - once for `myData[x]`, then again to get to `myData[x][y]`, and once more finally for `myData[x][y][z]`.

Since this lookup is in the inner-most portion of the loop, we have redundant reads. `myData[x]` and `myData[x][y]` are being recomputed even when only `z`'s value is changing. The lookups were performed during a previous iteration, but the results weren't stored.

For *your* loop, there are **many** layers of lookups being computed each iteration, even when only the value of `o` is changing in that inner-loop.

##### An Improvement for the Bottleneck

To make one lookup per loop iteration, per loop level, simply store the intermediate lookups. Using `int*` as the indirection (though any type would work here), the sample code above (with `myData`) would become:

```c
int **a, *b;
for (int x = 0; x < someVal1; x++) {
    a = myData[x]; // Store the lookup.
    for (int y = 0; y < someVal2; y++) {
        b = a[y]; // Indirection based on the stored lookup.
        for (int z = 0; z < someVal3; z++) {
            b[z] = -3; // This can be extrapolated as needed to deeper levels.
        }
    }
}
```

This is just sample code, small adjustments may be necessary to get it to compile (casts and so forth). Note that there is probably no advantage to using this approach with a 3-dimensional array. However, for a 17-dimensional large data set with simple inner-loop operations (such as assignment), this approach should help quite a bit.

Finally, I'm assuming you aren't actually just assigning the value of `-3`. If you are, you can use `memset` to accomplish that goal much more efficiently.

Question:

I'm testing my neural network on XOR comparisons, and I've encountered an error I'd like to fix without altering the number of neurons in the first hidden layer. The code causing the error is:

```java
public double dotProduct(int[][] a, double[][] ds) {
    int i;
    double sum = 0;
    for (i = 0; i < a.length; i++) {
        int j;
        for (j = 0; j < a[i].length; j++) {
            sum += a[i][j] * ds[i][j];
        }
    }
    return sum;
}
```

This is giving me a null pointer exception. The dot product calculation itself is used to generate the dot product from an input set my neural net has been provided with.

The input set is this:

```java
int inputSets[][] = {
    {0, 0, 1},
    {0, 1, 1},
    {1, 0, 1},
    {0, 1, 0},
    {1, 0, 0},
    {1, 1, 1},
    {0, 0, 0}
};
```

It's a multidimensional array containing 7 arrays. It is then used in this:

```java
public double think(int[][] input) {
    output_from_layer1 = sigmoid(dotProd.dotProduct(input, layer1.getWeights()));
    return output_from_layer1;
}
```

The sigmoid part of the function isn't an issue, as it takes a double, and dotProduct is supposed to output a double. The issue, as far as I'm aware, is that the dotProduct function takes a larger multidimensional array and attempts to cross it with a smaller one (the `layer1.getWeights()` getter that returns the weights array for that layer).

The weights of a layer are defined as such:

```java
layerWeights = new double[numNeurons][inpNum];
```

and the layer that's being used in the dot product is:

```java
XORlayer layer1 = new XORlayer(4, 3);
```

So 4 neurons with 3 inputs each. The issue stems from the fact that there aren't enough neurons in this layer for the number of inputs, as far as I'm aware, which generates the null pointer exception when there isn't anything further to multiply against the input values.

We have 12 inputs in the neurons, and 21 input values.

My main question is, is there a way to solve this issue so the dot product operation is completed successfully without simply expanding the amount of neurons the layer contains to 7?

Answer:

This discussion might help. As suggested there, since you're using a 2D array, matrix multiplication (instead of dot product) would likely be more appropriate.

Of course, similar to the dot product, the dimensions must be aligned for matrix multiplication.

`inputSets` is a 7x3 matrix and `layerWeights` is a 4x3 matrix. The transpose of `layerWeights` is a 3x4 matrix. Now the dimensions are aligned, and the matrix multiplication results in a 7x4 matrix.

Based on the posted code, I would suggest something like this:

```java
output_from_layer1 = sigmoid(matrixMult.multiply(input, transpose(layer1.getWeights())));
```
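For what it's worth, the same shape algebra is easy to verify in numpy (an illustration only - `matrixMult.multiply` and `transpose` above are hypothetical Java helpers):

```python
import numpy as np

inputs = np.random.random((7, 3))    # like inputSets: 7 samples, 3 inputs
weights = np.random.random((4, 3))   # like layerWeights: 4 neurons, 3 inputs each

# (7, 3) @ (3, 4) -> (7, 4): one activation per sample per neuron.
out = inputs @ weights.T
print(out.shape)                     # (7, 4)
```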