AcuTrust Entropy Attacks
Example of an incorrect password showing randomness
When a valid password is entered, the following is displayed:
Example of a correct password showing text
The problem with this is that the relative entropy differs greatly between the two types of images. The first image aims to be highly random, so the relative "nearness" of pixels to one another is very low. The number of "clusters" is also far lower when the password is incorrect, because fewer pixels sit close to one another. A cluster is defined as any group of pixels in which each pixel is within one pixel of another along the horizontal, vertical, or diagonal axes.
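The cluster definition above amounts to grouping dark pixels by 8-connectivity (each pixel within one step of another horizontally, vertically, or diagonally). A minimal sketch of that grouping, assuming the dark pixels have already been extracted as (x, y) coordinates; the function name and flood-fill approach are illustrative, not AcuTrust's actual code:

```python
def find_clusters(pixels):
    """Group a collection of (x, y) dark-pixel coordinates into
    8-connected clusters, per the definition above."""
    remaining = set(pixels)
    clusters = []
    while remaining:
        # Flood-fill outward from an arbitrary seed pixel.
        stack = [remaining.pop()]
        cluster = []
        while stack:
            x, y = stack.pop()
            cluster.append((x, y))
            # Check all eight neighbors (horizontal, vertical, diagonal).
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    neighbor = (x + dx, y + dy)
                    if neighbor in remaining:
                        remaining.remove(neighbor)
                        stack.append(neighbor)
        clusters.append(cluster)
    return clusters
```

For example, the pixels (0,0) and (1,1) are diagonal neighbors and merge into one cluster, while a pixel at (5,5) stands alone as a second cluster.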
To gauge the dispersion, an analysis program was created to take the points on the image and "plot" them against the relative clusters. Every cluster is plotted on the graph along with all of its adjacent clusters, then projected against the X axis, which gives the total number of clusters at each horizontal position. Compressing the data into one dimension makes the graph appear more dramatic without presuming what a cluster "means" (in terms of text or noise) or where it resides. This compressed information is then graphed against the number of clusters found on that axis.
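The one-dimensional compression described above can be sketched as a tally of clusters per horizontal slice. This is an assumption about how the analysis program bins the data; the slice width, function name, and use of each cluster's leftmost pixel are all illustrative choices:

```python
from collections import Counter

def clusters_per_slice(clusters, slice_width=10):
    """Tally clusters per horizontal slice of the image, compressing the
    2-D cluster positions into one dimension as described above.

    `clusters` is a list of clusters, each a list of (x, y) pixels.
    """
    counts = Counter()
    for cluster in clusters:
        # Assign the cluster to the slice containing its leftmost pixel.
        leftmost_x = min(x for x, y in cluster)
        counts[leftmost_x // slice_width] += 1
    return counts
```

With the default width of 10, clusters whose leftmost pixels fall at x = 0 and x = 5 land in slice 0, and a cluster at x = 15 lands in slice 1.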
Clustering, instead of individual pixel analysis, has the side effect of removing the solitary-pixel noise that AcuTrust introduces (presumably to deter optical recognition). AcuTrust does in fact use 2x2-pixel DIV layers, but only to make the final text larger and more readable.
When the data points are plotted, the difference in relative entropy becomes highly apparent: with an incorrect password there are few clusters and very low nearness. The first graph, which represents three different incorrect passwords, contains between 28 and 29 sets of clusters, or horizontal slices (out of a possible 29), and a total of between 150 and 204 individual clusters across the three tests. That works out to an average of between 5.0 and ~7.0 clusters per set. Averaging removes false positives by looking at the clusters in aggregate rather than at any single spike, which could be created randomly.
The Y axis on the graph is an iteration through the X axis of the dynamic image. The X axis on the graph is the amount of nearness each pixel encounters relative to the others (this has a multiplicative effect that makes the graph more dramatic along the Y axis):
Three incorrect passwords with high randomness/dispersion
If the graph of incorrect passwords is compared against the graph of a correct password (as seen below), the relative "nearness" is far higher and the number of clusters is also far higher. The graph below contains between 23 and 24 sets of clusters, or horizontal slices (out of a possible 29), representing a total of between 660 and 798 individual clusters, for an average of between ~28.7 and ~34.7 clusters per set or horizontal slice (compared to between 5.0 and ~7.0 in the first graph):
Three correct passwords with low randomness/dispersion
The easiest way to read the second graph is to imagine it turned on its side with the spikes pointing to the right. The first set of spikes on the left-hand side represents the bulk of the text "AcuTrust DEMO" on the dynamic image. The large dip in the center is the whitespace between the two lines of text, and the second spike is the second line, "7/18/05". The lack of clusters at both extremes of the lower graph shows the whitespace surrounding the text. With relative ease, this method reliably yields a positive fingerprint on valid passwords.
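The fingerprinting decision above reduces to comparing the average cluster count per occupied slice against a threshold that sits between the observed incorrect-password average (5.0 to ~7.0) and the correct-password average (~28.7 to ~34.7). A minimal sketch of that decision, assuming per-slice cluster counts have already been computed; the threshold value and function name are assumptions, not part of the original tool:

```python
def password_appears_valid(slice_counts, threshold=15.0):
    """Return True when the image looks like rendered text rather than
    noise, judged by the average clusters per occupied slice.

    The default threshold of 15.0 is an illustrative midpoint between the
    incorrect-password averages (~5-7) and the correct ones (~29-35)
    reported above.
    """
    occupied = [n for n in slice_counts if n > 0]
    if not occupied:
        return False  # An all-blank image is not a text fingerprint.
    average = sum(occupied) / len(occupied)
    return average >= threshold
```

Averaging across all occupied slices, rather than reacting to a single tall spike, is what suppresses the random false positives mentioned earlier.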