## Is there a fast Python algorithm to find all points in a dataset which lie in a given circle?

I have a large dataset (over 10^5 points). I am looking for a fast algorithm that finds all points in the dataset which lie in a circle given by its center point and radius.

I thought about using a k-d tree to find, for example, the 10 nearest points to the circle's center, and then checking whether they are inside the circle. But I am not sure this is the correct approach.
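For reference, a k-d tree supports a fixed-radius query directly, so there is no need to guess a nearest-neighbour count. A minimal sketch using SciPy's `cKDTree.query_ball_point` (assuming SciPy is available):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 2))   # 1e5 points in the unit square

tree = cKDTree(points)              # build once: O(n log n)
center, radius = (0.5, 0.5), 0.1

# query_ball_point returns the indices of all points within `radius`
idx = tree.query_ball_point(center, radius)
inside = points[idx]

# sanity check: every returned point really lies in the circle
d2 = ((inside - np.array(center)) ** 2).sum(axis=1)
assert (d2 <= radius ** 2).all()
```

Building the tree only pays off if you run many queries against the same dataset; for a single query a linear scan is cheaper.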


I benchmarked a `numexpr` version against a simple Numpy implementation as follows:

```python
#!/usr/bin/env python3
import numpy as np
import numexpr as ne

# Ensure repeatable, deterministic randomness!
np.random.seed(42)

# Generate test arrays
N = 1000000
X = np.random.rand(N)
Y = np.random.rand(N)

# Define centre and radius
cx = cy = r = 0.5

def method1(X, Y, cx, cy, r):
    """Straight Numpy determination of points in circle"""
    d = (X-cx)**2 + (Y-cy)**2
    res = d < r**2
    return res

def method2(X, Y, cx, cy, r):
    """Numexpr determination of points in circle"""
    res = ne.evaluate('((X-cx)**2 + (Y-cy)**2) < r**2')
    return res

def method3(data, a, b, r):
    """List-based determination of points in circle, with pre-filtering using a square"""
    in_square_points = [(x, y) for (x, y) in data if a-r < x < a+r and b-r < y < b+r]
    in_circle_points = [(x, y) for (x, y) in in_square_points if (x-a)**2 + (y-b)**2 < r**2]
    return in_circle_points

# Massage input data (before timing) to match algorithm
data = [(x, y) for x, y in zip(X, Y)]
```

I then timed it in IPython as follows:

```
%timeit method1(X,Y,cx,cy,r)
6.68 ms ± 246 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit method2(X,Y,cx,cy,r)
743 µs ± 17.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit method3(data,cx,cy,r)
1.11 s ± 9.81 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

So the `numexpr` version came out 9x faster. As the points lie in the range [0..1], the algorithm is effectively estimating `pi`, and all three methods agree:

```python
>>> method1(X,Y,cx,cy,r).sum()
784973
>>> method2(X,Y,cx,cy,r).sum()
784973
>>> len(method3(data,cx,cy,r))
784973
>>> 4 * 784973 / N
3.139
```

**Note**: I should point out that `numexpr` automatically multi-threads your code across multiple CPU cores for you. If you feel like experimenting with the number of threads, you can change it dynamically before calling `method2()`, or even inside it, with:

```python
# Split calculations across 6 threads
ne.set_num_threads(6)
```

Anyone else wishing to test the speed of their method is welcome to use my code as a benchmarking framework.


To check whether a point `(a, b)` is within a circle of center `(x, y)` and radius `r`, you can simply compute:

```python
within_circle = (x - a)**2 + (y - b)**2 <= r*r
```

This is just the distance formula with both sides squared: comparing the squared distance from the center to the point against `r*r` avoids taking a square root.
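As a quick numeric check of the test above (made-up numbers, using a 3-4-5 triangle so the point sits exactly on the circumference):

```python
x, y, r = 0.0, 0.0, 5.0   # circle centred at the origin, radius 5
a, b = 3.0, 4.0           # point at distance sqrt(9 + 16) = 5 from the centre

# squared distance 25 compared against squared radius 25 -> on the boundary counts as inside
within_circle = (x - a)**2 + (y - b)**2 <= r*r
print(within_circle)  # True
```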


If you first want to filter a large dataset without heavy computation, you can use a square of size 2r x 2r with the same center as the circle (where r is the circle's radius).


If the center has coordinates (a, b) and the radius is r, then the points (x, y) inside the square satisfy:

```python
in_square_points = [(x, y) for (x, y) in data if a-r < x < a+r and b-r < y < b+r]
```

And finally, after this filter, you can apply the circle equation:

```python
in_circle_points = [(x, y) for (x, y) in in_square_points if (x-a)**2 + (y-b)**2 < r**2]
```
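For completeness, the same two-step filter can also be written with NumPy boolean masks instead of list comprehensions, which avoids Python-level loops entirely (a sketch; `points_in_circle` is just an illustrative name):

```python
import numpy as np

def points_in_circle(X, Y, a, b, r):
    """Square pre-filter, then exact circle test, using boolean masks."""
    # Cheap bounding-box test first: |x - a| < r and |y - b| < r
    square = (np.abs(X - a) < r) & (np.abs(Y - b) < r)
    Xs, Ys = X[square], Y[square]
    # Exact circle test only on the survivors
    circle = (Xs - a) ** 2 + (Ys - b) ** 2 < r ** 2
    return Xs[circle], Ys[circle]

X = np.array([0.1, 0.5, 0.9, 0.52])
Y = np.array([0.1, 0.5, 0.9, 0.48])
xs, ys = points_in_circle(X, Y, 0.5, 0.5, 0.1)
print(xs, ys)  # [0.5  0.52] [0.5  0.48]
```

In practice the vectorized exact test alone (as in `method1` above) is often fast enough that the pre-filter only helps for very expensive per-point checks.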

**EDIT**

If your input is structured like this:

```python
data = [
    [13, 45],
    [-1, 2],
    ...
    [60, -4]
]
```

then you can use a plain for-loop, if you prefer:

```python
in_square_points = []
for i in range(len(data)):
    x = data[i][0]
    y = data[i][1]
    if a-r < x < a+r and b-r < y < b+r:
        in_square_points.append([x, y])
print(in_square_points)
```
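One of the comments asks how to also get each point's index; `enumerate` gives it to you without manual indexing. A small sketch with made-up data:

```python
data = [[13, 45], [-1, 2], [60, -4], [0, 1]]
a, b, r = 0, 0, 5  # example centre and radius

in_square_points = []
for i, (x, y) in enumerate(data):
    if a - r < x < a + r and b - r < y < b + r:
        # keep the original index alongside the coordinates
        in_square_points.append((i, x, y))

print(in_square_points)  # [(1, -1, 2), (3, 0, 1)]
```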


If you are only interested in the number of points that lie in the circle, you can try Numba.

```python
import numpy as np
import numba as nb
import numexpr as ne

def method2(X, Y, cx, cy, r):
    """Numexpr method"""
    res = ne.evaluate('((X-cx)**2 + (Y-cy)**2) < r**2')
    return res

@nb.njit(fastmath=True, parallel=True)
def method3(X, Y, cx, cy, r):
    acc = 0
    for i in nb.prange(X.shape[0]):
        if ((X[i]-cx)**2 + (Y[i]-cy)**2) < r**2:
            acc += 1
    return acc
```

**Timings**

```python
# Ensure repeatable, deterministic randomness!
np.random.seed(42)

# Generate test arrays
N = 1000000
X = np.random.rand(N)
Y = np.random.rand(N)

# Define centre and radius
cx = cy = r = 0.5

# @Mark Setchell's numexpr version
%timeit method2(X,Y,cx,cy,r)
# 825 µs ± 22.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit method3(X,Y,cx,cy,r)
# 480 µs ± 94.4 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
```



##### Comments

- One tip is not to take the square root... leave the distance as the sum of 2 squares but compare against the square of the radius.
- I would start with NumPy arrays (so C under the hood), but if the algorithm must be in Python, look at numba in order to speed it up further. Working on the GPU might also be an option here, see for example pygpu. 10^5 points is not really large. That should take only fractions of a second.
- Did my answer, or any others, sort out your problem? If so, please consider accepting it as your answer - by clicking the hollow tick/checkmark beside the vote count. If not, please say what didn't work so that I, or someone else, can assist you further. Thanks. meta.stackexchange.com/questions/5234/…
- I find this idea quite interesting. I will try to implement it and hope that I can post my code if there is any trouble with it.
- @loop_ It is worth noting that KD-Tree creation complexity is `O(n*log(n))`, while a simple linear search like the one proposed by others is only `O(n)`. If, for this dataset, you only want to perform 1 search, then it is not efficient to build a KD-Tree. However, if you need to perform many searches within the same dataset (more than `k*log(n)`, where `k` is a factor depending on the KD-Tree creation algorithm), then it is worth using a KD-Tree because the creation complexity will be amortized over time.
- Assuming you are doing many queries, I'd still recommend benchmarking against the solution Mark Setchell posted - specifically because this is Python, so a theoretically efficient data structure written in pure Python can easily lose to a simple one implemented in C (as `numpy` is) even for n ~ 10^5 or more.
- Thank you, I will also try this approach. Since I am not used to the short notation of the loops, is there a way to also get the index of each (x, y)?
- I updated my answer to rewrite the algorithm with a common for-loop, tell me if it's still not what you need
- This is what I actually just implemented. And it works pretty good. Thank you very much!