## An efficient way to simulate many particle collisions?

I would like to write a small program simulating many particle collisions, starting in 2D (I would extend it to 3D later on), to simulate (in 3D) the convergence towards the Boltzmann distribution and also to see how the distribution evolves in 2D.

I have not yet started programming, so please don't ask for code samples; it is a rather general question that should help me get started. The physics behind this problem is no issue for me; it is rather that I will have to simulate at least 200-500 particles to achieve a fairly good speed distribution, and I would like to do that in real time.

Now, for every time step, I would first update the positions of all the particles and then check for collisions to update the velocity vectors. That, however, involves a lot of checks, since I would have to see whether every single particle collides with every other particle. I found this post on more or less the same problem, and the approach used there is also the only one I can think of. I am afraid, however, that it will not work very well in real time, because it would involve too many collision checks.

So now: even if this approach works performance-wise (getting, say, 40 fps), can anybody think of a way to avoid unnecessary collision checks?

My own idea was splitting up the board (or, in 3D, the space) into squares (cubes) whose dimensions are at least the particle diameter, and only checking for a collision if the centres of two particles lie in the same or adjacent squares of the grid...
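A minimal sketch of that grid idea in Python (the function name and data layout are mine, purely for illustration): bucket each particle centre into a cell of side `cell_size` (at least one diameter), then only test pairs coming from the same or adjacent cells.

```python
from collections import defaultdict

def broad_phase_pairs(positions, cell_size):
    """Bucket particle centres into square cells of side cell_size (>= one
    particle diameter), then emit only those index pairs whose cells are
    identical or adjacent. Function name and layout are illustrative."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(positions):
        grid[(int(x // cell_size), int(y // cell_size))].append(idx)

    pairs = set()
    for (cx, cy), members in grid.items():
        # Scan this cell plus its 8 neighbours.
        for nx in (cx - 1, cx, cx + 1):
            for ny in (cy - 1, cy, cy + 1):
                for i in members:
                    for j in grid.get((nx, ny), ()):
                        if i < j:
                            pairs.add((i, j))
    return pairs
```

Only the pairs returned would then need the exact circle-circle distance test.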

I would be happy to hear more ideas, as I would like to increase the number of particles as much as I can while still keeping the calculation/simulation running in real time.

**Edit:** All collisions are purely elastic, with no other forces doing work on the particles. The initial situation will be determined by user-chosen variables that set random starting positions and velocities.

**Edit2:** I found a good and very helpful paper on the simulation of particle collisions here. Hopefully it might help some people who are interested in more depth.


A good high level example of spatial division is to think about the game of pong, and detecting collisions between the ball and a paddle.

Say the paddle is in the top left corner of the screen, and the ball is near the bottom left corner of the screen...

```
--------------------
|▌                 |
|                  |
|                  |
|    ○             |
--------------------
```

It's not necessary to check for collision each time the ball moves. Instead, split the playing field into two right down the middle. Is the ball in the left hand side of the field? (simple point inside rectangle algorithm)

```
   Left      Right
---------|----------
|▌       |         |
|        |         |
|        |         |
|    ○   |         |
---------|----------
```

If the answer is yes, split the left hand side again, this time horizontally, so we have a top-left and a bottom-left partition.

```
   Left      Right
---------|----------
|▌       |         |
|        |         |
|--------|         |
|        |         |
|    ○   |         |
---------|----------
```

Is this ball in the same top-left corner of the screen as the paddle? If not, no need to check for collision! **Only objects which reside in the same partition need to be tested for collision with each other**. By doing a series of simple (and cheap) point inside rectangle checks, you can easily save yourself from doing a more expensive shape/geometry collision check.

You can continue splitting the space down into smaller and smaller chunks until an object spans two partitions. This is the basic principle behind BSP (a technique pioneered in early 3D games like Quake). There is a whole bunch of theory on the web about spatial partitioning in 2 and 3 dimensions.

http://en.wikipedia.org/wiki/Space_partitioning

In 2 dimensions you would often use a BSP or quadtree. In 3 dimensions you would often use an octree. However the underlying principle remains the same.
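As a toy illustration of the recursive splitting (a Python sketch; the name `quadrant_path` and the `depth` parameter are mine, not from any particular library), this computes which quadrant a point falls into at each level of subdivision:

```python
def quadrant_path(point, bounds, depth):
    """Return the sequence of quadrants ('NW','NE','SW','SE') containing
    `point` when `bounds` = (xmin, ymin, xmax, ymax) is split in half
    recursively `depth` times. Illustrative sketch, not a full quadtree."""
    x, y = point
    xmin, ymin, xmax, ymax = bounds
    path = []
    for _ in range(depth):
        mx, my = (xmin + xmax) / 2, (ymin + ymax) / 2
        if y >= my:        # top half
            ymin, vert = my, 'N'
        else:              # bottom half
            ymax, vert = my, 'S'
        if x < mx:         # left half
            xmax, horiz = mx, 'W'
        else:              # right half
            xmin, horiz = mx, 'E'
        path.append(vert + horiz)
    return path
```

In the pong picture above, after one split the paddle's path is `['NW']` and the ball's is `['SW']`, so no expensive collision check is needed between them.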


You can think along the lines of 'divide and conquer'. The idea is to identify orthogonal parameters which don't impact each other. For example, you can split the momentum into components along the 2 axes in 2D (3 axes in 3D) and compute collisions/positions independently for each. Another way to identify such parameters is to group particles which are moving perpendicular to each other: even if they impact, the net momentum along those lines doesn't change.

I agree the above doesn't fully answer your question, but it conveys a fundamental idea which you may find useful here.


Let us say that at time `t`, for each particle, you have:

- `P` position
- `V` speed

and an N*(N-1)/2 array of information between particle A(i) and A(j), where i < j; you use symmetry to evaluate an upper triangular matrix instead of a full N*(N-1) grid.

MAT[i][j] = { dx, dy, dz, sx, sy, sz }.

which means that, with respect to particle i, particle j has a distance made up of three components dx, dy and dz, and a delta-vee multiplied by dt, which is sx, sy, sz.

To move to instant t+dt you tentatively update the positions of all particles based on their speed

```
px[i] += dx[i]  // px,py,pz make up vector P; dx,dy,dz is vector V premultiplied by dt
py[i] += dy[i]  // Which means that we could use "particle 0" as a fixed origin
pz[i] += dz[i]  // except you can't collide with the origin, since it's virtual
```

Then you check the whole N*(N-1)/2 array and tentatively calculate the new relative distance between every couple of particles.

```
dx1 = dx + sx
dy1 = dy + sy
dz1 = dz + sz
DN  = dx1*dx1 + dy1*dy1 + dz1*dz1  // This is the new squared distance
```

If DN < D^2, with D the diameter of a particle, you have had a collision in the dt just passed.

You then calculate exactly when this happened, i.e. the exact d't of the collision, which you can do from the old squared distance D2 (dx*dx+dy*dy+dz*dz) and the new DN:

d't = [(SQRT(D2)-D)/(SQRT(D2)-SQRT(DN))]*dt

(Time needed to reduce the distance from SQRT(D2) to D, at a speed that covers the distance SQRT(D2)-SQRT(DN) in time dt). **This makes the hypothesis that particle j, seen from the reference frame of particle i, hasn't "overshot"**.
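The d't formula above can be written as a small helper (a Python sketch; the function name is mine):

```python
from math import sqrt

def collision_fraction(d2_old, d2_new, diameter):
    """Fraction d't/dt at which the pair distance first equals `diameter`,
    using the linear interpolation from the formula above. Assumes
    d2_old > diameter**2 >= d2_new (i.e. no overshoot)."""
    r_old, r_new = sqrt(d2_old), sqrt(d2_new)
    return (r_old - diameter) / (r_old - r_new)
```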

It is a more hefty calculation, but you only need it when you get a collision.

Knowing d't, and d"t = dt-d't, you can repeat the position calculation on Pi and Pj using dx*d't/dt etc. and obtain the exact position P of particles i and j at the instant of collision; you update speeds, then integrate it for the remaining d"t and get the positions at the end of time dt.

Note that if we stopped here this method would break if a three-particle collision took place, and would only handle two-particle collisions.

So instead of running the calculations we just mark that a collision occurred at d't for particles (i,j), and at the end of the run, we save the minimum d't at which a collision occurred, and between whom.

I.e., say we check particles 25 and 110 and find a collision at 0.7 dt; then we find a collision between 110 and 139 at 0.3 dt. There are no collisions earlier than 0.3 dt.

We enter collision updating phase, and "collide" 110 and 139 and update their position and speed. Then repeat the 2*(N-2) calculations for each (i, 110) and (i, 139).

We will discover that there probably still is a collision with particle 25, but now at 0.5 dt, and maybe, say, another between 139 and 80 at 0.9 dt. 0.5 dt is the new minimum, so we repeat collision calculation between 25 and 110, and repeat, suffering a slight "slow down" in the algorithm for each collision.
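The bookkeeping just described - track each pending pair's collision time and always resolve the minimum first - might look like this (a toy sketch; names are mine):

```python
def earliest_collision(pending):
    """pending maps a pair (i, j) to the fraction of dt at which that pair
    would collide. Return (pair, time) for the earliest event, or None if
    no collision is pending. Sketch of the bookkeeping described above."""
    if not pending:
        return None
    pair = min(pending, key=pending.get)
    return pair, pending[pair]
```

With the example above, `{(25, 110): 0.7, (110, 139): 0.3}` yields the pair `(110, 139)` at 0.3 dt as the first event to resolve.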

Thus implemented, the only risk now is that of "ghost collisions", i.e., a particle is at D > diameter from a target at time t-dt, and is at D > diameter *on the other side* at time t.

This you can only avoid by choosing a dt so that no particle ever travels more than half its own diameter in any given dt. Actually, you might use an adaptive dt based on the speed of the fastest particle. Ghost glancing collisions are still possible; a further refinement is to reduce dt based on the nearest distance between any two particles.
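The half-diameter rule gives a one-line adaptive timestep (a sketch, with illustrative names):

```python
def adaptive_dt(speeds, diameter, dt_max):
    """Shrink the timestep so the fastest particle covers at most half a
    diameter per step (the half-diameter rule above), preventing the
    tunnelling 'ghost collisions'."""
    v_max = max(speeds)
    if v_max == 0:
        return dt_max
    return min(dt_max, 0.5 * diameter / v_max)
```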

This way, it is true that the algorithm slows down considerably in the vicinity of a collision, but it speeds up enormously when collisions aren't likely. If the minimum distance (which we calculate at almost no cost during the loop) between two particles is such that the fastest particle (which also we find out at almost no cost) can't cover it in less than fifty dts, that's a 4900% speed increase right there.

Anyway, in the no-collision generic case we have now done five sums (actually more like thirty-four due to array indexing), three products and several assignments for every particle couple. If we include the (k,k) couple to take into account the particle update itself, we have a good approximation of the cost so far.

This method has the advantage of being O(N^2) - scaling with the square of the number of particles - instead of O(M^3), which scales with the volume of space involved.

I'd expect a C program on a modern processor to be able to manage in real time a number of particles in the order of the tens of thousands.

P.S.: this is actually very similar to Nicolas Repiquet's approach, including the necessity of slowing down in the 4D vicinity of multiple collisions.


Until a collision between two particles (or between a particle and a wall) happens, the integration is trivial. The approach here is to calculate the time of the first collision, integrate until then, then calculate the time of the second collision, and so on.

Let's define `tw[i]` as the time the `i`-th particle takes to hit the first wall. It is quite easy to calculate, although you must take into account the diameter of the sphere.

The calculation of the time `tc[i,j]` of the collision between two particles `i` and `j` takes a little more time, and follows from the study in time of their distance `d`:

`d^2=Δx(t)^2+Δy(t)^2+Δz(t)^2`

We study if there exists a positive `t` such that `d^2=D^2`, `D` being the diameter of the particles (or the sum of the two radii, if you want them different). Now, consider the first term of the sum on the RHS:

`Δx(t)^2 = (x[i](t)-x[j](t))^2`

`        = (x[i](t0)-x[j](t0)+(u[i]-u[j])t)^2`

`        = (x[i](t0)-x[j](t0))^2 + 2(x[i](t0)-x[j](t0))(u[i]-u[j])t + (u[i]-u[j])^2t^2`

where the new terms appearing define the law of motion of the two particles for the `x` coordinate,

`x[i](t)=x[i](t0)+u[i]t`

`x[j](t)=x[j](t0)+u[j]t`

and `t0` is the time of the initial configuration. Let `(u[i],v[i],w[i])` be the three components of the velocity of the `i`-th particle. Doing the same for the other two coordinates and summing up, we get a second-order polynomial equation in `t`:

`at^2+2bt+c=0`,

where

`a=(u[i]-u[j])^2+(v[i]-v[j])^2+(w[i]-w[j])^2`

`b=(x[i](t0)-x[j](t0))(u[i]-u[j]) + (y[i](t0)-y[j](t0))(v[i]-v[j]) + (z[i](t0)-z[j](t0))(w[i]-w[j])`

`c=(x[i](t0)-x[j](t0))^2 + (y[i](t0)-y[j](t0))^2 + (z[i](t0)-z[j](t0))^2-D^2`
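The quadratic above can be solved directly for the earliest contact time (a Python sketch; the function name is mine, and it assumes the particles do not already overlap):

```python
from math import sqrt

def pair_collision_time(p_i, v_i, p_j, v_j, diameter):
    """Earliest t >= 0 at which two spheres reach separation `diameter`,
    solving a*t^2 + 2*b*t + c = 0 with a, b, c as defined above.
    Returns float('inf') if the root is complex or the pair is separating.
    Assumes the particles do not already overlap (c > 0)."""
    dp = [p_i[k] - p_j[k] for k in range(3)]
    dv = [v_i[k] - v_j[k] for k in range(3)]
    a = sum(x * x for x in dv)
    b = sum(dp[k] * dv[k] for k in range(3))
    c = sum(x * x for x in dp) - diameter ** 2
    if a == 0 or b >= 0:          # same velocity, or moving apart
        return float('inf')
    disc = b * b - a * c          # reduced discriminant (linear coefficient is 2b)
    if disc < 0:
        return float('inf')
    t = (-b - sqrt(disc)) / a     # smaller root = first contact
    return t if t >= 0 else float('inf')
```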

Now, there are many criteria to evaluate the existence of a real solution, etc.; you can evaluate those later if you want to optimize. In any case you get `tc[i,j]`, and if it is complex or negative you set it to plus infinity. To speed things up, remember that `tc[i,j]` is symmetric, and you also want to set `tc[i,i]` to infinity for convenience.

Then you take the minimum `tmin` of the array `tw` and of the matrix `tc`, and integrate in time for the time `tmin`.

You then subtract `tmin` from all elements of `tw` and of `tc`.

In case of an elastic collision of the `i`-th particle with a wall, you just flip the velocity of that particle, and recalculate only `tw[i]` and `tc[i,k]` for every other `k`.

In case of a collision between two particles, you recalculate `tw[i],tw[j]` and `tc[i,k],tc[j,k]` for every other `k`. The evaluation of an elastic collision in 3D is not trivial; maybe you can use this:

http://www.atmos.illinois.edu/courses/atmos100/userdocs/3Dcollisions.html
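For the equal-mass, frictionless hard-sphere case, the velocity update at the moment of contact is the standard textbook one - swap the velocity components along the line of centres, keep the tangential components. A sketch (not taken from the linked page; names are mine):

```python
def elastic_collision(p1, v1, p2, v2):
    """Equal-mass, frictionless hard-sphere collision at the moment of
    contact: exchange the velocity components along the line of centres
    and leave the tangential components untouched."""
    n = [p2[k] - p1[k] for k in range(3)]           # line of centres (nonzero at contact)
    n2 = sum(x * x for x in n)
    rel = [v1[k] - v2[k] for k in range(3)]         # relative velocity
    s = sum(rel[k] * n[k] for k in range(3)) / n2   # normal projection
    v1_new = [v1[k] - s * n[k] for k in range(3)]
    v2_new = [v2[k] + s * n[k] for k in range(3)]
    return v1_new, v2_new
```

In a head-on hit the two velocities simply swap, as expected for equal masses.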

As for how the process scales: you have an initial overhead that is `O(n^2)`. Then the integration between two timesteps is `O(n)`, and handling a wall hit or a collision requires an `O(n)` recalculation. But what really matters is how the average time between collisions scales with `n` - and there should be an answer somewhere in statistical physics for this :-)

Don't forget to add further intermediate timesteps if you want to plot a property against time.


##### Comments

- "*Premature optimization is the root of all evil*" - Donald Knuth. You should first program it, then, if it doesn't work fast enough, optimize.
- @NikoDrašković I don't fully agree with that. The point is that there is no point in starting one way of doing something and then, if there is a better way that is completely different, starting all over again. As I said, I am trying to find the most efficient way possible to simulate as many particles as possible.
- I guess I misunderstood you then - you're not looking to simulate 100-500 particles but as many as you can. Though if you implement it straightforwardly, it's really not all that different when you optimize - it's not *starting all over again*.
- Sorry, I will update my post a bit. I meant "at least 100-500". Well, that really depends on the method. I just don't want to get into some sort of optimisations that will lead to disorganised code, but instead start nicely with a useful method and THEN optimise that method, if possible...
- Are we talking about particles in free space, no drag, just `x(t) = x_0 + v*t` and elastic collisions?
- Notice that it can happen that a single cell contains many, many segments. In such case you must iterate again, subgridding that single cell and reapplying the algorithm. For many particles, the optimal size of the grid may become very small.
- That's where the recursive trait of octrees came in handy :)
- I don't see why you need octrees; it seems to me that you need to subgrid only one cell at a time.
- Another thing. This works well in 2D. In 3D two particles can collide, BUT their trajectories can go through different cells. With trajectory I mean the path that their centres would take.
- @NicolasRepiquet I don't directly see why you need Octrees either, as for 3D it is obvious that all lines will lie at the same time, therefore, 3D lines are enough for 3D particle simulations. Good method though, thank you!