On the variability page of the TQ assessment, the coefficient of variability is the variance (the standard deviation squared) divided by the mean. It shows how many times larger the variance is than the average, so a higher number means a much more spread-out distribution, which suggests less effective control of brain activity at that site and in that frequency. Variability in one frequency doesn't necessarily have any relationship to variability at others; if beta, for example, is surging, that doesn't mean all the other frequencies are as well. Brain activation is like a truck rolling down a steep mountain. Without brakes (control circuits to manage all that energy) it literally runs away. These control circuits are critical to effective brain operation. When they are too weak, and variability is high, the client will have difficulty using those parts of the brain. But if the brakes are too tight, the truck will burn them up on the way down the mountain. When I speak of overly controlled frequencies at sites, I mean there is too little variability. The client likely gets stuck. This often combines with high levels of fast-wave coherence.
I usually look for coefficients in the 1-2 range as being pretty reasonable. Coefficients of variability below 1 indicate excessive control; those well over 2 are probably weak and under-controlled. The further above 2, the more likely training will be valuable in that area and frequency. Brains being what they are, I probably see a dozen under-controlled/hyper-variable clients for every one I see in the hypo-variable mode.
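To make the arithmetic concrete, here is a minimal sketch in Python of the coefficient as described above (variance of the band amplitudes divided by their mean), along with the 1-2 rule of thumb. The function names and the theta values are purely illustrative, not anything taken from the TQ software.

```python
import numpy as np

def variability_coefficient(amplitudes):
    """Variance of the band amplitudes divided by their mean,
    as described for the TQ variability page."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return amplitudes.var() / amplitudes.mean()

def interpret(coef):
    """Rough reading of the value using the 1-2 rule of thumb."""
    if coef < 1.0:
        return "over-controlled (too little variability)"
    if coef <= 2.0:
        return "reasonable range"
    return "under-controlled (training may be valuable here)"

# Made-up theta amplitudes (microvolts) sampled over one epoch
theta = [12.0, 14.5, 9.8, 30.2, 11.1, 13.7, 28.9, 10.4]
coef = variability_coefficient(theta)
print(f"coefficient = {coef:.2f}: {interpret(coef)}")
```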
You have to be careful when examining variability, because obviously one thing that could cause a highly variable signal would be something from outside the brain. Muscle bracing or clenching often results in highly variable high-beta signals. When the muscles are clenched, amplitude goes way up; when they are relaxed, amplitudes come way down. Eye blinks in slow frequencies will have the same effect.
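One simple way to guard against that, sketched below, is to screen out samples that sit implausibly far from the rest of the recording before computing the coefficient. This is my own illustration, not part of the TQ assessment, and the cutoff value is an arbitrary choice.

```python
import numpy as np

def screen_artifacts(amplitudes, cutoff=4.0):
    """Drop samples that sit implausibly far from the median, on the theory
    that muscle bursts or eye blinks, not brain activity, produced them.
    The cutoff (in robust z-score units) is an arbitrary choice."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    med = np.median(amplitudes)
    mad = np.median(np.abs(amplitudes - med))   # robust estimate of spread
    if mad == 0:
        return amplitudes                        # flat signal, nothing to screen
    keep = np.abs(amplitudes - med) / (1.4826 * mad) < cutoff
    return amplitudes[keep]
```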
Reducing Variability
One of the oldest and still very effective ways of reducing variability in the EEG (i.e. activating control circuits) is to train down amplitude. Most amplitude training works by setting inhibits at around 75-85% reward, which means the training is cutting off the "outliers", those excursions in the EEG that are furthest above the mean. The more successfully the brain reduces those events, the lower the mean goes, and the lower the variance. This kind of training has the benefit of being the most immediate in its responses (little or no calculation is required).
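As a rough sketch of that percentile logic (not the actual implementation in any neurofeedback package), the inhibit threshold can be placed so that the client is under threshold about 75-85% of the time, which automatically targets the highest excursions:

```python
import numpy as np

def inhibit_threshold(recent_amplitudes, reward_percent=80):
    """Place the inhibit threshold at the given percentile of recent
    amplitudes, so the client is under threshold roughly
    reward_percent of the time."""
    return np.percentile(recent_amplitudes, reward_percent)

def rewarded(current_amplitude, threshold):
    """The top excursions exceed the threshold and are inhibited;
    everything below it is rewarded."""
    return current_amplitude < threshold
```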
I've used the Range Threshold objects several times. They are more directly related to training variability, since they allow you to set thresholds in terms of standard deviations instead of amplitude. I suspect there is a trick to setting the speed at which the calculations reset (they lag a bit more than amplitude readings do), so the SD doesn't keep changing rapidly, but I can't say I have ever spent enough time with it to know what that trick might be.
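A minimal sketch of the idea, assuming an exponentially weighted running mean and standard deviation (which may or may not be how the Range Threshold objects actually do it), looks something like this; the smoothing factor stands in for the "speed of resetting the calculations," and the value shown is a guess:

```python
class SDThreshold:
    """Reward when the signal stays within k standard deviations of a
    slowly updated running mean. Statistics are exponentially weighted so
    they drift rather than jump; a small smoothing factor keeps the SD
    from changing rapidly."""

    def __init__(self, k=1.5, smoothing=0.01):
        self.k = k
        self.alpha = smoothing   # smaller = slower, more stable statistics
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:
            self.mean = x        # seed the running mean with the first sample
        delta = x - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        sd = self.var ** 0.5
        return abs(x - self.mean) <= self.k * sd   # True = reward condition met
```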
I’ve also had some success with designs that train down amplitude and variability at the same time.
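A hedged sketch of what such a combined criterion might look like (the limits are placeholders, not settings from any particular design):

```python
def combined_reward(current, recent, amp_limit, coef_limit=2.0):
    """Reward only when the current amplitude is under its limit AND the
    variability coefficient (variance / mean) of recent samples is under
    its limit."""
    mean = sum(recent) / len(recent)
    var = sum((x - mean) ** 2 for x in recent) / len(recent)
    return current < amp_limit and (var / mean) < coef_limit
```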
Understanding the Statistics of Variability
On the Variance page, the numbers are not the true coefficient of variation (which would be the standard deviation of the amplitudes divided by the mean). They are actually the variance (the standard deviation squared) divided by the mean, which makes outliers show up more clearly.
Quick course in stats. If we take the average of all the points measured for, say, theta, that is the mean for theta. The average value.
Then if we take the difference between each individual point and that average, some of those differences are positive (above the average) and some are negative (below the average), so simply averaging them would let the highs and lows cancel out. To get around that, each difference is squared first (a negative number squared gives a positive number, just as a positive one does). The average of those squared differences is the variance, and the square root of the variance is the standard deviation, a measure of how close the actual signals were to the average of all of them. Obviously the greater these numbers, the more spread out the distribution. A large S.D. suggests that there are a lot of very high and very low signals, with the mean somewhere in the middle. A small S.D. suggests that the signal is pretty consistent, with small differences between individual measurements.
The variance, then, is just the S.D. multiplied by itself (squared). It measures the same spread around the mean, but because it is squared it is usually the larger number, and the biggest excursions from the mean count for even more.
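A tiny worked example may help; the five theta readings below are made up purely to show the arithmetic:

```python
readings = [8.0, 10.0, 12.0, 9.0, 11.0]   # five made-up theta amplitudes (uV)

mean = sum(readings) / len(readings)                # 10.0
deviations = [x - mean for x in readings]           # [-2, 0, 2, -1, 1] (note they sum to zero)
variance = sum(d * d for d in deviations) / len(readings)   # (4+0+4+1+1)/5 = 2.0
sd = variance ** 0.5                                # about 1.41

print(mean, variance, sd)
```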
The coefficient of variation is usually the standard deviation (the typical distance of the points from their average) divided by the average itself. So a very small signal, like maybe beta, and a very large signal, like maybe alpha, are brought onto roughly the same scale.
Dividing the variance by the mean does sort of the same thing, but it tends to make widely spread sets of data points stand out more clearly. That's what is shown in the TQ Assess BE Pro and now in the TLC Assess Pro for Thought Technology.
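To see the difference, here is a small made-up comparison: two theta signals with the same mean, one with four times the spread of the other. The classic coefficient of variation (S.D./mean) goes up four times, while the variance/mean figure goes up sixteen times, which is why widely spread data stand out more clearly on the page.

```python
import numpy as np

steady   = np.array([9.0, 10.0, 11.0, 10.0, 9.0, 11.0])    # small swings around 10 uV
volatile = np.array([6.0, 10.0, 14.0, 10.0, 6.0, 14.0])    # same mean, 4x the swings

for name, sig in [("steady", steady), ("volatile", volatile)]:
    cv = sig.std() / sig.mean()   # classic coefficient of variation
    vm = sig.var() / sig.mean()   # variance divided by mean, as on the TQ page
    print(f"{name:8s}  CV = {cv:.2f}   variance/mean = {vm:.2f}")
```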