Last night, in the shower, I realized that my collision detection routine could be even faster. Here is a representative snippet of code from my previous post:

```
d2 = (x-self.x[0:i])*(x-self.x[0:i]) + (y-self.y[0:i])*(y-self.y[0:i]) + (z-self.z[0:i])*(z-self.z[0:i])
```

For some reason, I used the code `(x-self.x)*(x-self.x)` instead of `(x-self.x)**2`. Upon further reflection, I realized that `(x-self.x)*(x-self.x)` computes the difference between array elements twice, and then multiplies the results. Using the power operator should let the interpreter compute the difference only once, and then multiply each element by itself. Here is the updated code, using Python's power operator:

```
d2 = (x-self.x[0:i])**2 + (y-self.y[0:i])**2 + (z-self.z[0:i])**2
```

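The claim that the power form saves one array subtraction can be sanity-checked in isolation with a quick microbenchmark. This sketch uses made-up array names and sizes, not the actual collision-detection data:

```python
import timeit
import numpy as np

# Illustrative stand-ins for the position arrays in the post.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.random(n)
xs = rng.random(n)

# Multiplication form: the subtraction (x - xs) is evaluated twice.
t_mul = timeit.timeit(lambda: (x - xs) * (x - xs), number=20)

# Power form: the subtraction is evaluated once, then squared.
t_pow = timeit.timeit(lambda: (x - xs) ** 2, number=20)

print(f"multiply: {t_mul:.3f}s  power: {t_pow:.3f}s")
```

Both expressions produce identical values; only the number of temporary arrays differs.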
The previous version of the code ran in 46.3 seconds on a representative problem; the updated version ran in only 35.8 seconds on the same problem. That's a pretty good improvement. Out of curiosity, I decided to try NumPy's `power()` function in place of Python's power operator:

```
two = int_(2)
d2 = power(x-self.x[0:i],two) + power(y-self.y[0:i],two) + power(z-self.z[0:i],two)
```

This version took 55.5 seconds to execute. Using a NumPy integer scalar instead of a Python integer made no difference. NumPy's `power()` function is quite sophisticated; it can take two array arguments and compute element-by-element powers. I suspect that this flexibility adds overhead that slows it down compared to Python's built-in power operator.
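To illustrate that flexibility: `numpy.power()` broadcasts its two arguments, so each base can be raised to its own exponent. The values below are just illustrative:

```python
import numpy as np

# Each base is raised to the exponent at the same position.
bases = np.array([2.0, 3.0, 4.0])
exponents = np.array([3, 2, 0.5])
print(np.power(bases, exponents))  # → [8. 9. 2.]
```

The scalar power operator `**` never needs to handle this general case, which is one plausible source of the speed difference.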