## Auto Detection of Rotation Angle on Arbitrary Image with Orthogonal Features

I have a task at hand where I need to detect the rotation angle of an image like the following sample (part of a microchip photograph). The image does contain orthogonal features, but they could have different sizes, with different resolution/sharpness. The image will be slightly imperfect due to some optical distortion and aberrations. Sub-pixel angle detection accuracy is required (i.e. the error should be well under 0.1°; something like 0.01° would be tolerable). For reference, for this image the optimal angle is around 32.19°.

Currently I've tried 2 approaches. Both do a brute-force search for a local minimum with a 2° step, then gradient descent down to a 0.0001° step size.

- Merit function is

  `sum(pow(img(x+1)-img(x-1), 2) + pow(img(y+1)-img(y-1), 2))`

  calculated across the image. When horizontal/vertical lines are aligned, there is less change in the horizontal/vertical directions. Precision was about 0.2°.

- Merit function is (max-min) over some stripe width/height of the image. This stripe is also looped across the image, and the merit function is accumulated. This approach also focuses on the smaller change of brightness when horizontal/vertical lines are aligned, but it can detect smaller changes over a larger base (the stripe width, which could be around 100 pixels). This gives better precision, up to 0.01°, but has a lot of parameters to tweak (stripe width/height, for example, is quite sensitive), which could be unreliable in the real world.
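For reference, a minimal NumPy sketch of the first merit function (the rotation and angle-search loop are omitted; `merit` is a name introduced here for illustration):

```python
import numpy as np

def merit(img):
    """Sum of squared central differences img(x+1)-img(x-1) and img(y+1)-img(y-1).

    Lower values mean less horizontal/vertical brightness change, i.e. the
    orthogonal features are better aligned with the pixel grid.
    """
    dx = img[:, 2:] - img[:, :-2]  # img(x+1) - img(x-1)
    dy = img[2:, :] - img[:-2, :]  # img(y+1) - img(y-1)
    return np.sum(dx**2) + np.sum(dy**2)
```

In the search, this would be evaluated on the image rotated by each candidate angle.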

An edge detection filter did not help much.

My concern is the very small change in the merit function in both cases between the worst and best angles (less than a 2× difference).

Do you have any better suggestions on writing merit function for angle detection?

**Update:** Full size sample image is uploaded here (51 MiB)

After all processing it will end up looking like this.

If I understand your method 1 correctly, with it, if you used a circularly symmetrical region and did the rotation about the center of the region, you would eliminate the region's dependency on the rotation angle and get a more fair comparison by the merit function between different rotation angles. I will suggest a method that is essentially equivalent to that, but uses the full image and does not require repeated image rotation, and will include low-pass filtering to remove pixel grid anisotropy and for denoising.

##### Gradient of isotropically low-pass filtered image

First, let's calculate a local gradient vector at each pixel for the green color channel in the full size sample image.

I derived horizontal and vertical differentiation kernels by differentiating the continuous-space impulse response of an ideal low-pass filter with a flat circular frequency response (which removes the effect of the choice of image axes by ensuring that there is no different level of detail diagonally compared to horizontally or vertically), by sampling the resulting function, and by applying a rotated cosine window:

$$\begin{gather}h_x[x, y] = \begin{cases}0&\text{if }x = y = 0,\\-\displaystyle\frac{\omega_c^2\,x\,J_2\left(\omega_c\sqrt{x^2 + y^2}\right)}{2 \pi\,(x^2 + y^2)}&\text{otherwise,}\end{cases}\\ h_y[x, y] = \begin{cases}0&\text{if }x = y = 0,\\-\displaystyle\frac{\omega_c^2\,y\,J_2\left(\omega_c\sqrt{x^2 + y^2}\right)}{2 \pi\,(x^2 + y^2)}&\text{otherwise,}\end{cases}\end{gather}\tag{1}$$

where $J_2$ is a 2nd order Bessel function of the first kind, and $\omega_c$ is the cut-off frequency in radians. Python source (does not have the minus signs of Eq. 1):

```python
import matplotlib.pyplot as plt
import scipy
import scipy.special
import numpy as np

def rotatedCosineWindow(N):  # N = horizontal size of the targeted kernel, also its vertical size, must be odd.
    return np.fromfunction(lambda y, x: np.maximum(np.cos(np.pi/2*np.sqrt(((x - (N - 1)/2)/((N - 1)/2 + 1))**2 + ((y - (N - 1)/2)/((N - 1)/2 + 1))**2)), 0), [N, N])

def circularLowpassKernelX(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    kernel = np.fromfunction(lambda y, x: omega_c**2*(x - (N - 1)/2)*scipy.special.jv(2, omega_c*np.sqrt((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2))/(2*np.pi*((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2)), [N, N])
    kernel[(N - 1)//2, (N - 1)//2] = 0
    return kernel

def circularLowpassKernelY(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    kernel = np.fromfunction(lambda y, x: omega_c**2*(y - (N - 1)/2)*scipy.special.jv(2, omega_c*np.sqrt((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2))/(2*np.pi*((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2)), [N, N])
    kernel[(N - 1)//2, (N - 1)//2] = 0
    return kernel

N = 41  # Horizontal size of the kernel, also its vertical size. Must be odd.

window = rotatedCosineWindow(N)

# Optional window function plot
#plt.imshow(window, vmin=-np.max(window), vmax=np.max(window), cmap='bwr')
#plt.colorbar()
#plt.show()

omega_c = np.pi/4  # Cutoff frequency in radians <= pi
kernelX = circularLowpassKernelX(omega_c, N)*window
kernelY = circularLowpassKernelY(omega_c, N)*window

# Optional kernel plot
#plt.imshow(kernelX, vmin=-np.max(kernelX), vmax=np.max(kernelX), cmap='bwr')
#plt.colorbar()
#plt.show()
```

*Figure 1. 2-d rotated cosine window.*

*Figure 2. Windowed horizontal isotropic-low-pass differentiation kernels, for different cut-off frequency $\omega_c$ settings. Top: omega_c = np.pi, middle: omega_c = np.pi/4, bottom: omega_c = np.pi/16. The minus sign of Eq. 1 was left out. Vertical kernels look the same but have been rotated 90 degrees. A weighted sum of the horizontal and the vertical kernels, with weights $\cos(\phi)$ and $\sin(\phi)$, respectively, gives an analysis kernel of the same type for gradient angle $\phi$.*

Differentiation of the impulse response does not affect the bandwidth, as can be seen by its 2-d fast Fourier transform (FFT), in Python:

```python
# Optional FFT plot
absF = np.abs(np.fft.fftshift(np.fft.fft2(circularLowpassKernelX(np.pi, N)*window)))
plt.imshow(absF, vmin=0, vmax=np.max(absF), cmap='Greys', extent=[-np.pi, np.pi, -np.pi, np.pi])
plt.colorbar()
plt.show()
```

*Figure 3. Magnitude of the 2-d FFT of $h_x$. In frequency domain, differentiation appears as multiplication of the flat circular pass band by $\omega_x$, and by a 90 degree phase shift which is not visible in the magnitude.*

To do the convolution for the green channel and to collect a 2-d gradient vector histogram, for visual inspection, in Python:

```python
import scipy.ndimage

img = plt.imread('sample.tif').astype(float)
X = scipy.ndimage.convolve(img[:,:,1], kernelX)[(N - 1)//2:-(N - 1)//2, (N - 1)//2:-(N - 1)//2]  # Green channel only
Y = scipy.ndimage.convolve(img[:,:,1], kernelY)[(N - 1)//2:-(N - 1)//2, (N - 1)//2:-(N - 1)//2]  # ...

# Optional 2-d histogram
#hist2d, xEdges, yEdges = np.histogram2d(X.flatten(), Y.flatten(), bins=199)
#plt.imshow(hist2d**(1/2.2), vmin=0, cmap='Greys')
#plt.show()
#plt.imsave('hist2d.png', plt.cm.Greys(plt.Normalize(vmin=0, vmax=hist2d.max()**(1/2.2))(hist2d**(1/2.2))))  # To save the histogram image
#plt.imsave('histkey.png', plt.cm.Greys(np.repeat([(np.arange(200)/199)**(1/2.2)], 16, 0)))
```

This also crops the data, discarding `(N - 1)//2` pixels from each edge that were contaminated by the rectangular image boundary, before the histogram analysis.

*Figure 4. 2-d histograms of gradient vectors, for different low-pass filter cutoff frequency $\omega_c$ settings. In order, first with N=41: omega_c = np.pi, omega_c = np.pi/2, omega_c = np.pi/4 (same as in the Python listing), omega_c = np.pi/8, omega_c = np.pi/16; then with N=81: omega_c = np.pi/32; and with N=161: omega_c = np.pi/64. Denoising by low-pass filtering sharpens the circuit trace edge gradient orientations in the histogram.*

##### Vector length weighted circular mean direction

There is the Yamartino method of finding the "average" wind direction from multiple wind vector samples in one pass through the samples. It is based on the mean of circular quantities, which is calculated as the shift of a cosine that is a sum of cosines each shifted by a circular quantity of period $2\pi$. We can use a vector length weighted version of the same method, but first we need to bunch together all the directions that are equal modulo $\pi/2$. We can do this by multiplying the angle of each gradient vector $[X_k,Y_k]$ by 4, using a complex number representation:

$$Z_k = \frac{(X_k + Y_k i)^4}{\sqrt{X_k^2 + Y_k^2}^3} = \frac{X_k^4 - 6X_k^2Y_k^2 + Y_k^4 + (4X_k^3Y_k - 4X_kY_k^3)i}{\sqrt{X_k^2 + Y_k^2}^3},\tag{2}$$

satisfying $|Z_k| = \sqrt{X_k^2 + Y_k^2}$ and by later interpreting that the phases of $Z_k$ from $-\pi$ to $\pi$ represent angles from $-\pi/4$ to $\pi/4$, by dividing the calculated circular mean phase by 4:

$$\phi = \frac{1}{4}\operatorname{atan2}\left(\sum_k\operatorname{Im}(Z_k), \sum_k\operatorname{Re}(Z_k)\right)\tag{3}$$

where $\phi$ is the estimated image orientation.

The quality of the estimate can be assessed by doing another pass through the data and by calculating the mean weighted square circular distance, $\text{MSCD}$, between phases of the complex numbers $Z_k$ and the estimated circular mean phase $4\phi$, with $|Z_k|$ as the weight:

$$\begin{gather}\text{MSCD} = \frac{\sum_k|Z_k|\bigg(1 - \cos\Big(4\phi - \operatorname{atan2}\big(\operatorname{Im}(Z_k), \operatorname{Re}(Z_k)\big)\Big)\bigg)}{\sum_k|Z_k|}\\ = \frac{\sum_k\frac{|Z_k|}{2}\left(\left(\cos(4\phi) - \frac{\operatorname{Re}(Z_k)}{|Z_k|}\right)^2 + \left(\sin(4\phi) - \frac{\operatorname{Im}(Z_k)}{|Z_k|}\right)^2\right)}{\sum_k|Z_k|}\\ = \frac{\sum_k\big(|Z_k| - \operatorname{Re}(Z_k)\cos(4\phi) - \operatorname{Im}(Z_k)\sin(4\phi)\big)}{\sum_k|Z_k|},\end{gather}\tag{4}$$

which was minimized by $\phi$ calculated per Eq. 3. In Python:

```python
absZ = np.sqrt(X**2 + Y**2)
reZ = (X**4 - 6*X**2*Y**2 + Y**4)/absZ**3
imZ = (4*X**3*Y - 4*X*Y**3)/absZ**3
phi = np.arctan2(np.sum(imZ), np.sum(reZ))/4

sumWeighted = np.sum(absZ - reZ*np.cos(4*phi) - imZ*np.sin(4*phi))
sumAbsZ = np.sum(absZ)
mscd = sumWeighted/sumAbsZ
print("rotate", -phi*180/np.pi, "deg, RMSCD =", np.arccos(1 - mscd)/4*180/np.pi, "deg equivalent (weight = length)")
```

Based on my `mpmath` experiments (not shown), I think we won't run out of numerical precision even for very large images. For different filter settings (annotated), the outputs are, as reported between -45 and 45 degrees:

```
rotate 32.29809399495655 deg, RMSCD = 17.057059965741338 deg equivalent (omega_c = np.pi)
rotate 32.07672617150525 deg, RMSCD = 16.699056648843566 deg equivalent (omega_c = np.pi/2)
rotate 32.13115293914797 deg, RMSCD = 15.217534399922902 deg equivalent (omega_c = np.pi/4, same as in the Python listing)
rotate 32.18444156018288 deg, RMSCD = 14.239347706786056 deg equivalent (omega_c = np.pi/8)
rotate 32.23705383489169 deg, RMSCD = 13.63694582160468 deg equivalent (omega_c = np.pi/16)
```

Strong low-pass filtering appears useful, reducing the root mean square circular distance (RMSCD) equivalent angle, calculated as $\operatorname{acos}(1 - \text{MSCD})/4$. Without the 2-d rotated cosine window, some of the results would be off by a degree or so (not shown), which means that it is important to do proper windowing of the analysis filters. The RMSCD equivalent angle is not directly an estimate of the error in the angle estimate, which should be much less.

##### Alternative square-length weight function

Let's try the square of the vector length as an alternative weight function:

$$Z_k = \frac{(X_k + Y_k i)^4}{\sqrt{X_k^2 + Y_k^2}^2} = \frac{X_k^4 - 6X_k^2Y_k^2 + Y_k^4 + (4X_k^3Y_k - 4X_kY_k^3)i}{X_k^2 + Y_k^2},\tag{5}$$

In Python:

```python
absZ_alt = X**2 + Y**2
reZ_alt = (X**4 - 6*X**2*Y**2 + Y**4)/absZ_alt
imZ_alt = (4*X**3*Y - 4*X*Y**3)/absZ_alt
phi_alt = np.arctan2(np.sum(imZ_alt), np.sum(reZ_alt))/4

sumWeighted_alt = np.sum(absZ_alt - reZ_alt*np.cos(4*phi_alt) - imZ_alt*np.sin(4*phi_alt))
sumAbsZ_alt = np.sum(absZ_alt)
mscd_alt = sumWeighted_alt/sumAbsZ_alt
print("rotate", -phi_alt*180/np.pi, "deg, RMSCD =", np.arccos(1 - mscd_alt)/4*180/np.pi, "deg equivalent (weight = length^2)")
```

The square length weight reduces the RMSCD equivalent angle by about a degree:

```
rotate 32.264713568426764 deg, RMSCD = 16.06582418749094 deg equivalent (weight = length^2, omega_c = np.pi, N = 41)
rotate 32.03693157762725 deg, RMSCD = 15.839593856962486 deg equivalent (weight = length^2, omega_c = np.pi/2, N = 41)
rotate 32.11471435914187 deg, RMSCD = 14.315371970649874 deg equivalent (weight = length^2, omega_c = np.pi/4, N = 41)
rotate 32.16968341455537 deg, RMSCD = 13.624896827482049 deg equivalent (weight = length^2, omega_c = np.pi/8, N = 41)
rotate 32.22062839958777 deg, RMSCD = 12.495324176281466 deg equivalent (weight = length^2, omega_c = np.pi/16, N = 41)
rotate 32.22385477783647 deg, RMSCD = 13.629915935941973 deg equivalent (weight = length^2, omega_c = np.pi/32, N = 81)
rotate 32.284350817263906 deg, RMSCD = 12.308297934977746 deg equivalent (weight = length^2, omega_c = np.pi/64, N = 161)
```

This seems a slightly better weight function. I also added cutoffs $\omega_c = \pi/32$ and $\omega_c = \pi/64$. They use a larger `N`, resulting in a different cropping of the image and not strictly comparable MSCD values.

##### 1-d histogram

The benefit of the square-length weight function is more apparent with a 1-d weighted histogram of $Z_k$ phases. Python script:

```python
# Optional histogram
hist_plain, bin_edges = np.histogram(np.arctan2(imZ, reZ), weights=np.ones(absZ.shape)/absZ.size, bins=900)
hist, bin_edges = np.histogram(np.arctan2(imZ, reZ), weights=absZ/np.sum(absZ), bins=900)
hist_alt, bin_edges = np.histogram(np.arctan2(imZ_alt, reZ_alt), weights=absZ_alt/np.sum(absZ_alt), bins=900)
plt.plot((bin_edges[:-1]+(bin_edges[1]-bin_edges[0]))*45/np.pi, hist_plain, "black")
plt.plot((bin_edges[:-1]+(bin_edges[1]-bin_edges[0]))*45/np.pi, hist, "red")
plt.plot((bin_edges[:-1]+(bin_edges[1]-bin_edges[0]))*45/np.pi, hist_alt, "blue")
plt.xlabel("angle (degrees)")
plt.show()
```

*Figure 5. Linearly interpolated weighted histogram of gradient vector angles, wrapped to $-\pi/4\ldots\pi/4$ and weighted by (in order from bottom to top at the peak): no weighting (black), gradient vector length (red), square of gradient vector length (blue). The bin width is 0.1 degrees. Filter cutoff was omega_c = np.pi/4, same as in the Python listing. The bottom figure is zoomed at the peaks.*

##### Steerable filter math

We have seen that the approach works, but it would be good to have a better mathematical understanding. The $x$ and $y$ differentiation filter impulse responses given by Eq. 1 can be understood as the basis functions for forming the impulse response of a *steerable* differentiation filter that is sampled from a rotation of the right side of the equation for $h_x[x, y]$ (Eq. 1). This is more easily seen by converting Eq. 1 to polar coordinates:

$$\begin{align}h_x(r, \theta) = h_x[r\cos(\theta), r\sin(\theta)] &= \begin{cases}0&\text{if }r = 0,\\-\displaystyle\frac{\omega_c^2\,r\cos(\theta)\,J_2\left(\omega_c r\right)}{2 \pi\,r^2}&\text{otherwise}\end{cases}\\ &= \cos(\theta)f(r),\\ h_y(r, \theta) = h_y[r\cos(\theta), r\sin(\theta)] &= \begin{cases}0&\text{if }r = 0,\\-\displaystyle\frac{\omega_c^2\,r\sin(\theta)\,J_2\left(\omega_c r\right)}{2 \pi\,r^2}&\text{otherwise}\end{cases}\\ &= \sin(\theta)f(r),\\ f(r) &= \begin{cases}0&\text{if }r = 0,\\-\displaystyle\frac{\omega_c^2\,r\,J_2\left(\omega_c r\right)}{2 \pi\,r^2}&\text{otherwise,}\end{cases}\end{align}\tag{6}$$

where both the horizontal and the vertical differentiation filter impulse responses have the same radial factor function $f(r)$. Any rotated version $h(r, \theta, \phi)$ of $h_x(r, \theta)$ by steering angle $\phi$ is obtained by:

$$h(r, \theta, \phi) = h_x(r, \theta - \phi) = \cos(\theta - \phi)f(r)\tag{7}$$

The idea was that the steered kernel $h(r, \theta, \phi)$ can be constructed as a weighted sum of $h_x(r, \theta)$ and $h_y(r, \theta)$, with $\cos(\phi)$ and $\sin(\phi)$ as the weights, and that is indeed the case:

$$\cos(\phi) h_x(r, \theta) + \sin(\phi) h_y(r, \theta) = \cos(\phi) \cos(\theta) f(r) + \sin(\phi) \sin(\theta) f(r) = \cos(\theta - \phi) f(r) = h(r, \theta, \phi).\tag{8}$$
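This identity is easy to check numerically with the radial factor $f(r)$ of Eq. 6 (a small sketch; the function names are mine, and the minus sign is left out as in the listings):

```python
import numpy as np
from scipy.special import jv

def f(r, omega_c=np.pi/4):
    # Radial factor f(r) of Eq. 6 (for r > 0; minus sign left out)
    return omega_c**2 * jv(2, omega_c * r) / (2 * np.pi * r)

def h_steered(r, theta, phi):
    # Directly steered kernel of Eq. 7: h(r, theta, phi) = cos(theta - phi) f(r)
    return np.cos(theta - phi) * f(r)

def h_weighted(r, theta, phi):
    # Weighted sum of the two basis kernels, left side of Eq. 8
    return np.cos(phi)*np.cos(theta)*f(r) + np.sin(phi)*np.sin(theta)*f(r)

r = np.linspace(0.1, 20.0, 50)       # radii to test (avoiding r = 0)
theta, phi = 0.7, 32.19*np.pi/180    # arbitrary polar angle and steering angle
```

The two evaluations agree at every radius, as Eq. 8 states.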

We will arrive at an equivalent conclusion if we think of the isotropically low-pass filtered signal as the input signal and construct a partial derivative operator with respect to the first of the rotated coordinates $x_\phi$, $y_\phi$, rotated by angle $\phi$ from the coordinates $x$, $y$. (Differentiation can be considered a linear time-invariant system.) We have:

$$\begin{gather}x = \cos(\phi)x_\phi - \sin(\phi)y_\phi,\\ y = \sin(\phi)x_\phi + \cos(\phi)y_\phi\end{gather}\tag{9}$$

Using the chain rule for partial derivatives, the partial derivative operator with respect to $x_\phi$ can be expressed as a cosine and sine weighted sum of partial derivatives with respect to $x$ and $y$:

$$\begin{gather}\frac{\partial}{\partial x_\phi} = \frac{\partial x}{\partial x_\phi}\frac{\partial}{\partial x} + \frac{\partial y}{\partial x_\phi}\frac{\partial}{\partial y} = \frac{\partial \big(\cos(\phi)x_\phi - \sin(\phi)y_\phi\big)}{\partial x_\phi}\frac{\partial}{\partial x} + \frac{\partial \big(\sin(\phi)x_\phi + \cos(\phi)y_\phi\big)}{\partial x_\phi}\frac{\partial}{\partial y} = \cos(\phi)\frac{\partial}{\partial x} + \sin(\phi)\frac{\partial}{\partial y}\end{gather}\tag{10}$$

A question that remains to be explored is how a suitably weighted circular mean of gradient vector angles is related to the angle $\phi$ of in some way the "most activated" steered differentiation filter.

##### Possible improvements

To possibly improve results further, the gradient can be calculated also for the red and blue color channels, to be included as additional data in the "average" calculation.
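A minimal sketch of pooling the three channels into the sums of Eq. 3 (random arrays stand in here for the per-channel gradient images that the `kernelX`/`kernelY` convolutions would produce):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the (X, Y) gradient images of the red, green and blue channels.
channels = [(rng.standard_normal((16, 16)), rng.standard_normal((16, 16)))
            for _ in range(3)]

sumRe = 0.0
sumIm = 0.0
for X, Y in channels:
    absZ = np.sqrt(X**2 + Y**2)
    sumRe += np.sum((X**4 - 6*X**2*Y**2 + Y**4)/absZ**3)  # Re(Z_k), Eq. 2
    sumIm += np.sum((4*X**3*Y - 4*X*Y**3)/absZ**3)        # Im(Z_k), Eq. 2
phi = np.arctan2(sumIm, sumRe)/4  # Eq. 3, pooled over all three channels
```

The per-channel sums simply accumulate into the same numerator and denominator of the circular mean.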

I have in mind possible extensions of this method:

1) Use a larger set of analysis filter kernels and detect edges rather than detecting gradients. This needs to be carefully crafted so that edges in all directions are treated equally, that is, an edge detector for any angle should be obtainable by a weighted sum of orthogonal kernels. A set of suitable kernels can (I think) be obtained by applying the differential operators of Eq. 11, Fig. 6 (see also my Mathematics Stack Exchange post) on the continuous-space impulse response of a circularly symmetric low-pass filter.

$$\begin{gather}\lim_{h\to 0}\frac{\sum_{n=0}^{4N + 1} (-1)^n f\bigg(x + h\cos\left(\frac{2\pi n}{4N + 2}\right), y + h\sin\left(\frac{2\pi n}{4N + 2}\right)\bigg)}{h^{2N + 1}},\\ \lim_{h\to 0}\frac{\sum_{n=0}^{4N + 1} (-1)^n f\bigg(x + h\sin\left(\frac{2\pi n}{4N + 2}\right), y + h\cos\left(\frac{2\pi n}{4N + 2}\right)\bigg)}{h^{2N + 1}}\end{gather}\tag{11}$$

*Figure 6. Dirac delta relative locations in differential operators for construction of higher-order edge detectors.*

2) The calculation of a (weighted) mean of circular quantities can be understood as summing of cosines of the same frequency shifted by samples of the quantity (and scaled by the weight), and finding the peak of the resulting function. If similarly shifted and scaled harmonics of the shifted cosine, with carefully chosen relative amplitudes, are added to the mix, forming a sharper smoothing kernel, then multiple peaks may appear in the total sum and the peak with the largest value can be reported. With a suitable mixture of harmonics, that would give a kind of local average that largely ignores outliers away from the main peak of the distribution.
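A small synthetic sketch of this second idea (the harmonic amplitude `a = 0.5` and the grid search are arbitrary illustration choices of mine, not tuned values):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic circular data: a main peak near 0.8 rad plus uniform outliers.
theta = np.concatenate([rng.normal(0.8, 0.05, 900),
                        rng.uniform(-np.pi, np.pi, 100)])
w = np.ones_like(theta)  # weights (all equal here)

a = 0.5  # relative amplitude of the added 2nd harmonic (sharpens the kernel)
grid = np.linspace(-np.pi, np.pi, 3600, endpoint=False)
# Sum of shifted cosines plus their harmonics; the plain circular mean
# would keep only the first cosine term and have a closed-form peak.
score = np.array([np.sum(w*(np.cos(g - theta) + a*np.cos(2*(g - theta))))
                  for g in grid])
estimate = grid[np.argmax(score)]  # report the highest peak
```

With the harmonic added, the reported peak tracks the main mode of the distribution rather than a mean pulled by the outliers.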

##### Alternative approaches

It would also be possible to convolve the image by angle $\phi$ and angle $\phi + \pi/2$ rotated "long edge" kernels, and to calculate the mean square of the pixels of the two convolved images. The angle $\phi$ that maximizes the mean square would be reported. This approach might give a good final refinement for the image orientation finding, because it is risky to search the complete angle $\phi$ space at large steps.

Another approach is non-local methods, like cross-correlating distant similar regions, applicable if you know that there are long horizontal or vertical traces, or features that repeat many times horizontally or vertically.


There is a similar DSP trick here, but I don't remember the details exactly.

I read about it somewhere, some while ago. It has to do with figuring out fabric pattern matches regardless of the orientation. So you may want to research on that.

Grab a circle sample. Do sums along spokes of the circle to get a circumference profile. Then they did a DFT on that (it is inherently circular after all). Toss the phase information (make it orientation independent) and make a comparison.

Then they could tell whether two fabrics had the same pattern.
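My reconstruction of that technique as a sketch (the sampling details and function names are assumptions, not the original method):

```python
import numpy as np

def circumference_profile(img, cx, cy, radius, n_angles=360):
    # Sum pixel values along each spoke of a circle centered at (cx, cy),
    # using nearest-neighbour sampling: one value per spoke angle.
    profile = np.zeros(n_angles)
    r = np.arange(1, radius)
    for i in range(n_angles):
        a = 2*np.pi*i/n_angles
        x = np.round(cx + r*np.cos(a)).astype(int)
        y = np.round(cy + r*np.sin(a)).astype(int)
        profile[i] = img[y, x].sum()
    return profile

def rotation_invariant_signature(profile):
    # The profile is circular, so rotating the image only circularly shifts
    # it; the DFT magnitude tosses the phase and is orientation independent.
    return np.abs(np.fft.rfft(profile))

img = np.random.default_rng(2).random((64, 64))  # stand-in image
p = circumference_profile(img, 32, 32, 20)
```

Two images with the same pattern at different orientations would then give matching signatures.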

Your problem is similar.

It seems to me, without trying it first, that the characteristics of the pre-DFT profile should reveal the orientation. Doing standard deviations along the spokes instead of sums should work better, or maybe both.

Now, if you had an oriented reference image, you could use their technique.

Ced

Your precision requirements are rather strict.

I gave this a whack, taking the sum of the absolute values of the differences between subsequent points along each spoke, for each color.

Here is a graph of it around the circumference. Your value is plotted with the white markers.

You can sort of see it, but I don't think this is going to work for you. Sorry.

Progress Report: Some

I've decided on a three step process.

1) Find evaluation spot.

2) Coarse Measurement

3) Fine Measurement

Currently, the first step is user intervention. It should be automatable, but I'm not bothering. I have a rough draft of the second step. There's some tweaking I want to try. Finally, I have a few candidates for the third step that are going to take testing to see which works best.

The good news is it is lightning fast. If your only purpose is to make an image look level on a web page, then your tolerances are way too strict and the coarse measurement ought to be accurate enough.

This is the coarse measurement. Each pixel is about 0.6 degrees. (Edit, actually 0.3)

Progress Report: Able to get good results

Most aren't this good, but they are cheap (and fairly local) and finding spots to get good reads is easy..... for a human. Brute force should work fine for a program.

The results can be much improved on, this is a simple baseline test. I'm not ready to do any explaining yet, nor post the code, but this screen shot ain't photoshopped.

Progress Report: The code is posted, I'm done with this for a while.

This screenshot is the program working on Marcus' 45 degree shot.

The color channels are processed independently.

A point is selected as the sweep center.

A diameter is swept through 180 degrees at discrete angles.

At each angle, "volatility" is measured across the diameter. A trace is made for each channel, gathering samples. The sample value is a linear interpolation of the four corner values of whichever grid square the sample spot lands on.

For each channel trace:

- The samples are multiplied by a von Hann window function
- A Smooth/Differ pass is made on the samples
- The RMS of the Differ is used as a volatility measure

The lower row graphs are:

- First is the sweep of 0 to 180 degrees; each pixel is 0.5 degrees.
- Second is the sweep around the selected angle; each pixel is 0.1 degrees.
- Third is the sweep around the selected angle; each pixel is 0.01 degrees.
- Fourth is the trace Differ curve.

The initial selection is the minimal average volatility of the three channels. This will be close to, but usually not on, the best angle. The symmetry at the trough is a better indicator than the minimum. A best fit parabola in that neighborhood should yield a very good answer.
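The parabola refinement mentioned can be done with the standard three-point vertex formula (a sketch in Python rather than Gambas; `parabola_vertex` is a name of my choosing):

```python
def parabola_vertex(x_mid, step, y_left, y_mid, y_right):
    # Vertex of the parabola through (x_mid - step, y_left), (x_mid, y_mid),
    # (x_mid + step, y_right); refines a sampled minimum to sub-step accuracy.
    denom = y_left - 2.0*y_mid + y_right
    if denom == 0.0:
        return x_mid  # samples are collinear; no refinement possible
    return x_mid + 0.5*step*(y_left - y_right)/denom
```

Applied to the three volatility samples around the trough, this gives the refined angle directly.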

The source code (in Gambas, PPA gambas-team/gambas3) can be found at:

https://forum.gambas.one/viewtopic.php?f=4&t=707

It is an ordinary zip file, so you don't have to install Gambas to look at the source. The files are in the ".src" subdirectory.

Removing the von Hann window yields higher accuracy because it effectively lengthens the trace, but adds wobbles. Perhaps a double von Hann would be better, as the center is unimportant and a quicker onset of "when the teeter-totter hits the ground" will be detected. Accuracy can easily be improved by increasing the trace length as far as the image allows (yes, that's automatable). A better window function, sinc?

The measures I have taken at the current settings confirm the 32.19 value +/- 0.03 ish.

This is just the measuring tool. There are several strategies I can think of to apply it to the image. That, as they say, is an exercise for the reader. Or in this case, the OP. I'll be trying my own later.

There's head room for improvement in both the algorithm and the program, but already they are really useful.

Here is how the linear interpolation works

```
'---- Whole Number Portion
x = Floor(rx)
y = Floor(ry)

'---- Fractional Portions
fx = rx - x
fy = ry - y
gx = 1.0 - fx
gy = 1.0 - fy

'---- Weighted Average
vtl = ArgValues[x, y] * gx * gy          ' Top Left
vtr = ArgValues[x + 1, y] * fx * gy      ' Top Right
vbl = ArgValues[x, y + 1] * gx * fy      ' Bottom Left
vbr = ArgValues[x + 1, y + 1] * fx * fy  ' Bottom Right

v = vtl + vtr + vbl + vbr
```

Anybody know the conventional name for that?


This is a go at the first suggested extension of my previous answer.

##### Ideal circularly symmetric band-limiting filters

We construct an orthogonal bank of four filters bandlimited to inside a circle of radius $\omega_c$ on the frequency plane. The impulse responses of these filters can be linearly combined to form directional edge detection kernels. An arbitrarily normalized set of orthogonal filter impulse responses are obtained by applying the first two pairs of "beach-ball like" differential operators to the continuous-space impulse response of the circularly symmetric ideal band-limiting filter impulse response $h(x,y)$:

$$h(x,y) = \frac{\omega_c}{2\pi \sqrt{x^2 + y^2} } J_1 \big( \omega_c \sqrt{x^2 + y^2} \big)\tag{1}$$

$$\begin{align}h_{0x}(x, y) &\propto \frac{d}{dx}h(x, y),\\ h_{0y}(x, y) &\propto \frac{d}{dy}h(x, y),\\ h_{1x}(x, y) &\propto \left(\left(\frac{d}{dx}\right)^3-3\frac{d}{dx}\left(\frac{d}{dy}\right)^2\right)h(x, y),\\ h_{1y}(x, y) &\propto \left(\left(\frac{d}{dy}\right)^3-3\frac{d}{dy}\left(\frac{d}{dx}\right)^2\right)h(x, y)\end{align}\tag{2}$$

$$\begin{align}h_{0x}(x, y) &= \begin{cases}0&\text{if }x = y = 0,\\-\displaystyle\frac{\omega_c^2\,x\,J_2\left(\omega_c\sqrt{x^2 + y^2}\right)}{2 \pi\,(x^2 + y^2)}&\text{otherwise,}\end{cases}\\ h_{0y}(x, y) &= h_{0x}(y, x),\\ h_{1x}(x, y) &= \begin{cases}0&\text{if }x = y = 0,\\\displaystyle\frac{\omega_c\,x\,(3y^2 - x^2)\Big(J_0\left(\omega_c\sqrt{x^2 + y^2}\right)\omega_c\sqrt{x^2 + y^2}\,(\omega_c^2x^2 + \omega_c^2y^2 - 24) - 8J_1\left(\omega_c\sqrt{x^2 + y^2}\right)(\omega_c^2x^2 + \omega_c^2y^2 - 6)\Big)}{2\pi(x^2 + y^2)^{7/2}}&\text{otherwise,}\end{cases}\\ h_{1y}(x, y) &= h_{1x}(y, x),\end{align}\tag{3}$$

where $J_\alpha$ is a Bessel function of the first kind of order $\alpha$ and $\propto$ means "is proportional to". I used Wolfram Alpha queries ((d/dx)³; d/dx; (d/dx)(d/dy)²) to carry out the differentiation, and simplified the result.

Truncated kernels in Python:

```python
import matplotlib.pyplot as plt
import scipy
import scipy.special
import numpy as np

def h0x(x, y, omega_c):
    if x == 0 and y == 0:
        return 0
    return -omega_c**2*x*scipy.special.jv(2, omega_c*np.sqrt(x**2 + y**2))/(2*np.pi*(x**2 + y**2))

def h1x(x, y, omega_c):
    if x == 0 and y == 0:
        return 0
    return omega_c*x*(3*y**2 - x**2)*(scipy.special.j0(omega_c*np.sqrt(x**2 + y**2))*omega_c*np.sqrt(x**2 + y**2)*(omega_c**2*x**2 + omega_c**2*y**2 - 24) - 8*scipy.special.j1(omega_c*np.sqrt(x**2 + y**2))*(omega_c**2*x**2 + omega_c**2*y**2 - 6))/(2*np.pi*(x**2 + y**2)**(7/2))

def rotatedCosineWindow(N):  # N = horizontal size of the targeted kernel, also its vertical size, must be odd.
    return np.fromfunction(lambda y, x: np.maximum(np.cos(np.pi/2*np.sqrt(((x - (N - 1)/2)/((N - 1)/2 + 1))**2 + ((y - (N - 1)/2)/((N - 1)/2 + 1))**2)), 0), [N, N])

def circularLowpassKernel(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    kernel = np.fromfunction(lambda x, y: omega_c*scipy.special.j1(omega_c*np.sqrt((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2))/(2*np.pi*np.sqrt((x - (N - 1)/2)**2 + (y - (N - 1)/2)**2)), [N, N])
    kernel[(N - 1)//2, (N - 1)//2] = omega_c**2/(4*np.pi)
    return kernel

def prototype0x(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    kernel = np.zeros([N, N])
    for y in range(N):
        for x in range(N):
            kernel[y, x] = h0x(x - (N - 1)/2, y - (N - 1)/2, omega_c)
    return kernel

def prototype0y(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    return prototype0x(omega_c, N).transpose()

def prototype1x(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    kernel = np.zeros([N, N])
    for y in range(N):
        for x in range(N):
            kernel[y, x] = h1x(x - (N - 1)/2, y - (N - 1)/2, omega_c)
    return kernel

def prototype1y(omega_c, N):  # omega = cutoff frequency in radians (pi is max), N = horizontal size of the kernel, also its vertical size, must be odd.
    return prototype1x(omega_c, N).transpose()

N = 321  # Horizontal size of the kernel, also its vertical size. Must be odd.

window = rotatedCosineWindow(N)

# Optional window function plot
#plt.imshow(window, vmin=-np.max(window), vmax=np.max(window), cmap='bwr')
#plt.colorbar()
#plt.show()

omega_c = np.pi/8  # Cutoff frequency in radians <= pi
lowpass = circularLowpassKernel(omega_c, N)
kernel0x = prototype0x(omega_c, N)
kernel0y = prototype0y(omega_c, N)
kernel1x = prototype1x(omega_c, N)
kernel1y = prototype1y(omega_c, N)

# Optional kernel image save
plt.imsave('lowpass.png', plt.cm.bwr(plt.Normalize(vmin=-lowpass.max(), vmax=lowpass.max())(lowpass)))
plt.imsave('kernel0x.png', plt.cm.bwr(plt.Normalize(vmin=-kernel0x.max(), vmax=kernel0x.max())(kernel0x)))
plt.imsave('kernel0y.png', plt.cm.bwr(plt.Normalize(vmin=-kernel0y.max(), vmax=kernel0y.max())(kernel0y)))
plt.imsave('kernel1x.png', plt.cm.bwr(plt.Normalize(vmin=-kernel1x.max(), vmax=kernel1x.max())(kernel1x)))
plt.imsave('kernel1y.png', plt.cm.bwr(plt.Normalize(vmin=-kernel1y.max(), vmax=kernel1y.max())(kernel1y)))
plt.imsave('kernelkey.png', plt.cm.bwr(np.repeat([(np.arange(321)/320)], 16, 0)))
```

*Figure 1. Color-mapped 1:1 scale plot of circularly symmetric band-limiting filter impulse response, with cut-off frequency $\omega_c = \pi/8$. Color key: blue: negative, white: zero, red: maximum.*

*Figure 2. Color-mapped 1:1 scale plots of sampled impulse responses of filters in the filter bank, with cut-off frequency $\omega_c = \pi/8$, in order: $h_{0x}$, $h_{0y}$, $h_{1x}$, $h_{1y}$. Color key: blue: minimum, white: zero, red: maximum.*

Directional edge detectors can be constructed as weighted sums of these. In Python (continued):

```python
composite = kernel0x - 4*kernel1x
plt.imsave('composite0.png', plt.cm.bwr(plt.Normalize(vmin=-composite.max(), vmax=composite.max())(composite)))
plt.imshow(composite, vmin=-np.max(composite), vmax=np.max(composite), cmap='bwr')
plt.colorbar()
plt.show()

composite = (kernel0x + kernel0y) + 4*(kernel1x + kernel1y)
plt.imsave('composite45.png', plt.cm.bwr(plt.Normalize(vmin=-composite.max(), vmax=composite.max())(composite)))
plt.imshow(composite, vmin=-np.max(composite), vmax=np.max(composite), cmap='bwr')
plt.colorbar()
plt.show()
```

*Figure 3. Directional edge detection kernels constructed as weighted sums of kernels of Fig. 2. Color key: blue: minimum, white: zero, red: maximum.*

The filters of Fig. 3 should be better tuned for continuous edges than the gradient filters (the first two filters of Fig. 2).

##### Gaussian filters

The filters of Fig. 2 have a lot of oscillation due to strict band limiting. Perhaps a better starting point would be a Gaussian function, as in Gaussian derivative filters, which are comparatively much easier to handle mathematically. Let's try that instead. We start with the impulse response definition of a Gaussian "low-pass" filter:

$$h(x, y, \sigma) = \frac{e^{-\displaystyle\frac{x^2 + y^2}{2 \sigma^2}}}{2\pi \sigma^2}.\tag{4}$$
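As a quick numerical sanity check of Eq. 4 (my addition, a sketch; the values of $\sigma$ and the grid size are arbitrary choices): the filter is normalized to unit gain at zero frequency, so its sampled impulse response should sum to approximately 1.

```python
import numpy as np

def gaussian_lowpass(x, y, sigma):
    # Impulse response of the 2-d Gaussian low-pass filter of Eq. 4
    return np.exp(-(x**2 + y**2)/(2*sigma**2))/(2*np.pi*sigma**2)

sigma = 8.0
N = 129  # grid size in pixels, odd so the peak lands exactly on a sample
y, x = np.mgrid[-(N//2):N//2 + 1, -(N//2):N//2 + 1]
h = gaussian_lowpass(x, y, sigma)
print(h.sum())  # ≈ 1: unit gain at zero frequency
```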

We apply the operators of Eq. 2 to $h(x, y, \sigma)$ and normalize each filter $h_{..}$ by:

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}h_{..}(x, y, \sigma)^2\,dx\,dy = 1.\tag{5}$$

$$\begin{align}h_{0x}(x, y, \sigma) &= 2\sqrt{2\pi}σ^2 \frac{d}{dx}h(x, y, \sigma) = - \frac{\sqrt{2}}{\sqrt{\pi}σ^2} x e^{-\displaystyle\frac{x^2 + y^2}{2σ^2}},\\ h_{0y}(x, y, \sigma) &= h_{0x}(y, x, \sigma),\\ h_{1x}(x, y, \sigma) &= \frac{2\sqrt{3\pi}σ^4}{3}\left(\left(\frac{d}{dx}\right)^3-3\frac{d}{dx}\left(\frac{d}{dy}\right)^2\right)h(x, y, \sigma) = - \frac{\sqrt{3}}{3\sqrt{\pi}σ^4} (x^3 - 3xy^2) e^{-\displaystyle\frac{x^2 + y^2}{2σ^2}},\\ h_{1y}(x, y, \sigma) &= h_{1x}(y, x, \sigma).\end{align}\tag{6}$$
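The normalization of Eq. 5 can be checked numerically (an added sketch; $\sigma$ and the grid size are again arbitrary) by sampling the impulse responses $h_{0x}$ and $h_{1x}$ of Eq. 6 on a unit-spaced pixel grid, where the sum of squared samples approximates the integral:

```python
import numpy as np

sigma = 8.0
N = 161  # half-width of 80 px = 10 sigma, enough to capture the tails
y, x = np.mgrid[-(N//2):N//2 + 1, -(N//2):N//2 + 1]

# h0x and h1x of Eq. 6, normalized per Eq. 5
h0x = -np.sqrt(2)/(np.sqrt(np.pi)*sigma**2)*x*np.exp(-(x**2 + y**2)/(2*sigma**2))
h1x = -np.sqrt(3)/(3*np.sqrt(np.pi)*sigma**4)*(x**3 - 3*x*y**2)*np.exp(-(x**2 + y**2)/(2*sigma**2))

print(np.sum(h0x**2), np.sum(h1x**2))  # both ≈ 1
```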

We would like to construct from these, as their weighted sum, the impulse response of a vertical edge detector filter that maximizes the specificity $S$, defined as the mean sensitivity to a vertical edge over the possible edge shifts $s$, relative to the mean sensitivity over the possible edge rotation angles $\beta$ and possible edge shifts $s$:

$$S = \frac{2\pi\displaystyle\int_{-\infty}^{\infty}\Bigg(\int_{-\infty}^{\infty}\bigg(\int_{-\infty}^{s}h_x(x, y, \sigma)\,dx - \int_{s}^{\infty}h_x(x, y, \sigma)\,dx\bigg)dy\Bigg)^2ds} {\Bigg(\displaystyle\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}\bigg(\int_{-\infty}^{\infty}\Big(\int_{-\infty}^{s}h_x\big(\cos(\beta)x - \sin(\beta)y, \sin(\beta)x + \cos(\beta)y, \sigma\big)\,dx \\- \displaystyle\int_{s}^{\infty}h_x\big(\cos(\beta)x - \sin(\beta)y, \sin(\beta)x + \cos(\beta)y, \sigma\big)\,dx\Big)dy\bigg)^2ds\,d\beta\Bigg)}.\tag{7}$$

We only need a weighted sum of $h_{0x}$ with variance $\sigma^2$ and $h_{1x}$ with optimal variance. It turns out that $S$ is maximized by an impulse response:

$$\begin{align}h_x(x, y, \sigma) &= \frac{\sqrt{7625 - 2440\sqrt{5}}}{61} h_{0x}(x, y, \sigma) - \frac{2\sqrt{610\sqrt{5} - 976}}{61} h_{1x}(x, y, \sqrt{5}\sigma)\\ &= - \frac{\sqrt{15250 - 4880\sqrt{5}}}{61\sqrt{\pi}\sigma^2}xe^{-\displaystyle\frac{x^2 + y^2}{2\sigma^2}} + \frac{\sqrt{1830\sqrt{5} - 2928}}{4575 \sqrt{\pi} \sigma^4}(2x^3 - 6xy^2)e^{-\displaystyle\frac{x^2 + y^2}{10 \sigma^2}}\\ &= \frac{2\sqrt{\pi}\sigma^2\sqrt{15250 - 4880\sqrt{5}}}{61}\frac{d}{dx}h(x, y, \sigma) - \frac{100\sqrt{\pi}\sigma^4\sqrt{1830\sqrt{5} - 2928}}{183}\left(\left(\frac{d}{dx}\right)^3-3\frac{d}{dx}\left(\frac{d}{dy}\right)^2\right)h(x, y,\sqrt{5}\sigma)\\ &\approx 3.8275359956049814\,\sigma^2\frac{d}{dx}h(x, y, \sigma) - 33.044650082417731\,\sigma^4\left(\left(\frac{d}{dx}\right)^3-3\frac{d}{dx}\left(\frac{d}{dy}\right)^2\right)h(x, y,\sqrt{5}\sigma),\end{align}\tag{8}$$

also normalized by Eq. 5. To vertical edges, this filter has a specificity of $S = \frac{10\times 5^{1/4}}{9} + 2 \approx 3.661498645$, in contrast to the specificity $S = 2$ of a first-order Gaussian derivative filter with respect to $x$. The last part of Eq. 8 has normalization compatible with separable 2-d Gaussian derivative filters from Python's `scipy.ndimage.gaussian_filter`:

```python
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage

sig = 8
N = 161
x = np.zeros([N, N])
x[N//2, N//2] = 1
ddx = scipy.ndimage.gaussian_filter(x, sigma=[sig, sig], order=[0, 1], truncate=(N//2)/sig)
ddx3 = scipy.ndimage.gaussian_filter(x, sigma=[np.sqrt(5)*sig, np.sqrt(5)*sig], order=[0, 3], truncate=(N//2)/(np.sqrt(5)*sig))
ddxddy2 = scipy.ndimage.gaussian_filter(x, sigma=[np.sqrt(5)*sig, np.sqrt(5)*sig], order=[2, 1], truncate=(N//2)/(np.sqrt(5)*sig))
hx = 3.8275359956049814*sig**2*ddx - 33.044650082417731*sig**4*(ddx3 - 3*ddxddy2)
plt.imsave('hx.png', plt.cm.bwr(plt.Normalize(vmin=-hx.max(), vmax=hx.max())(hx)))
h = scipy.ndimage.gaussian_filter(x, sigma=[sig, sig], order=[0, 0], truncate=(N//2)/sig)
plt.imsave('h.png', plt.cm.bwr(plt.Normalize(vmin=-h.max(), vmax=h.max())(h)))
h1x = scipy.ndimage.gaussian_filter(x, sigma=[sig, sig], order=[0, 3], truncate=(N//2)/sig) - 3*scipy.ndimage.gaussian_filter(x, sigma=[sig, sig], order=[2, 1], truncate=(N//2)/sig)
plt.imsave('ddx.png', plt.cm.bwr(plt.Normalize(vmin=-ddx.max(), vmax=ddx.max())(ddx)))
plt.imsave('h1x.png', plt.cm.bwr(plt.Normalize(vmin=-h1x.max(), vmax=h1x.max())(h1x)))
plt.imsave('gaussiankey.png', plt.cm.bwr(np.repeat([(np.arange(161)/160)], 16, 0)))
```

*Figure 4. Color-mapped 1:1 scale plots of, in order: A 2-d Gaussian function, derivative of the Gaussian function with respect to $x$, a differential operator $\big(\frac{d}{dx}\big)^3-3\frac{d}{dx}\big(\frac{d}{dy}\big)^2$ applied to the Gaussian function, the optimal two-component Gaussian-derived vertical edge detection filter $h_x(x, y, \sigma)$ of Eq. 8. The standard deviation of each Gaussian was $\sigma = 8$ except for the hexagonal component in the last plot which had standard deviation $\sqrt{5}\times8$. Color key: blue: minimum, white: zero, red: maximum.*
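As an added numerical cross-check (my addition, not part of the original derivation; the edges are hard-thresholded at pixel resolution, so a small discretization error is expected), Eq. 7 can be evaluated on sampled kernels. It should come out near $S = 2$ for the plain Gaussian derivative and near $S \approx 3.66$ for the $h_x$ of Eq. 8:

```python
import numpy as np
import scipy.ndimage

# Build sampled kernels as above, at a reduced size for speed
sig = 4
N = 81
imp = np.zeros([N, N])
imp[N//2, N//2] = 1
ddx = scipy.ndimage.gaussian_filter(imp, sigma=sig, order=[0, 1])
ddx3 = scipy.ndimage.gaussian_filter(imp, sigma=np.sqrt(5)*sig, order=[0, 3])
ddxddy2 = scipy.ndimage.gaussian_filter(imp, sigma=np.sqrt(5)*sig, order=[2, 1])
h0 = ddx  # first-order Gaussian derivative with respect to x
hx = 3.8275359956049814*sig**2*ddx - 33.044650082417731*sig**4*(ddx3 - 3*ddxddy2)

y, x = np.mgrid[-(N//2):N//2 + 1, -(N//2):N//2 + 1]
shifts = np.arange(-(N//2), N//2 + 1)

def edge_response_power(h, beta):
    # Sum over shifts s of the squared response to a hard edge at angle beta
    proj = x*np.cos(beta) + y*np.sin(beta)
    return sum(np.sum(h*np.sign(s - proj))**2 for s in shifts)

def specificity(h, n_beta=72):
    # Discretized Eq. 7: vertical-edge power relative to the mean over angles
    betas = np.linspace(-np.pi, np.pi, n_beta, endpoint=False)
    return edge_response_power(h, 0.0)/np.mean([edge_response_power(h, b) for b in betas])

print(specificity(h0), specificity(hx))  # expected near 2 and near 3.66
```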

TO BE CONTINUED...
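As a hedged sketch of where this is heading (my addition, not part of the original answer): the summed squared response of the $h_x$ filter of Eq. 8 can serve as the kind of merit function the question asks for. On a synthetic image of vertical stripes, the merit should peak when the stripes are exactly vertical:

```python
import numpy as np
import scipy.ndimage

def hx_filter(img, sig):
    # Apply the two-component filter h_x of Eq. 8 using separable
    # Gaussian derivative filters, as in the code above
    ddx = scipy.ndimage.gaussian_filter(img, sigma=sig, order=[0, 1])
    ddx3 = scipy.ndimage.gaussian_filter(img, sigma=np.sqrt(5)*sig, order=[0, 3])
    ddxddy2 = scipy.ndimage.gaussian_filter(img, sigma=np.sqrt(5)*sig, order=[2, 1])
    return 3.8275359956049814*sig**2*ddx - 33.044650082417731*sig**4*(ddx3 - 3*ddxddy2)

# Synthetic test image: vertical stripes of period 32 px
yy, xx = np.mgrid[0:256, 0:256]
stripes = (np.sin(2*np.pi*xx/32) > 0).astype(float)

# Merit = summed squared filter response over the central region
# (cropping avoids the border artifacts introduced by rotation)
angles = [-10, -5, 0, 5, 10]
merits = []
for angle in angles:
    rot = scipy.ndimage.rotate(stripes, angle, reshape=False, order=1)
    merits.append(np.sum(hx_filter(rot, 4.0)[64:192, 64:192]**2))

print(angles[int(np.argmax(merits))])  # the merit peaks at 0 degrees
```

A full implementation would combine this with the corresponding horizontal detector $h_y$ and refine the angle by optimization, as in the question's coarse-search-plus-descent scheme.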


##### Comments

- It is very sad that it was transitioned from Stack Overflow to DSP. I do not see a DSP-like solution here, and chances are now much reduced. 99.9% of DSP algorithms and tricks are useless for this task. It seems like a custom algorithm or approach is needed here, not an FFT.
- I'm super happy to tell you that it's totally wrong to be sad; DSP.SE is the absolute right place to ask this! (not so much stackoverflow. It's not a programming question. You know your programming. You don't know how to process this image.) Images are signals, and DSP.SE concerns itself very much with image processing! Also, a great deal of general DSP tricks (even as known for e.g. communication signals) are very applicable to your problem :)
- How important is efficiency?
- by the way, even when running with a 0.04° resolution, I'm pretty sure the rotation is exactly 32°, not 32.19° – what are the resolutions of your original photography? Because at 800 px width, an uncorrected rotation of 0.01° is but 0.14 px height difference, and that would even under sinc interpolation be barely noticeable.
- @CedronDawg Definitely no real-time requirements, I can tolerate some 10-60 seconds of computation on some 8-12 cores.
- How accurate is the result you got?
- @Royi Maybe around 0.1 deg.
- @OlliNiemitalo which is pretty impressive, given the limited resolution!
- @OlliNiemitalo speaking of impressive: this. answer. is. that. word's. very. definition.
- Encouraging you and others to write answers like this is what I had in mind. Please don't feel like you owe me any other great answers or so! Remember: StackExchange Reputation is just literally imaginary points that strangers on the internet give each other to say who writes good answers. If I told my father that I got fake internet points for answering questions he'd – rightfully so – need a longer explanation on how that's actually rewarding :)
- hey, you don't need to be sorry for something that was a very clever approach, and might be super helpful for someone with a *similar* problem who'll come here later! +1