I appreciate that these dice/spinning/etc. images are not so good. Also I appreciate the 99% to 1% kind of ratio, hence the need to find an input into the "equation" besides just free will by itself.
The overarching concept is the difference between active and passive information. Some information we have encoded into us and we call it to mind actively. Much information, however, is relayed through our senses, relative to which we are "passive" in the technical sense (as in passible vs. impassible beings). Accordingly, our physical actions are caused by a peculiar conjunction of our pure active potential on the one hand (free will per se) and our empirical inclinations (desires/wants/drives/instincts/hungers) on the other. If free will were the only cause, the probability would be divided equally among the options; but we do not tend to believe in such a division, so to find the most probable action in a given situation we average in a counterweight: ((1/X) + Y)/2, where X = the number of abstract choices open to free will in general and Y = the passive-causality probability (100% for the option we most want, 0% otherwise). The action we do is then as probable as whatever that "equation" ends up with, like the 62.5% in the original example.
Illustrated (sort of):
Let's suppose I have an abstract choice between A, B, C, and D, and that under the circumstances D is what I most want.

Now, since we are not solely determined by passive information, it was not actually 100% probable that I would do D.

However, it was not merely 25% probable, either. Rather, it was (100% + 25%)/2, or 62.5%, probable. (And this would turn out to be somewhat formulaic, as long as the balance was between abstract activity and concrete experience with X held constant; but one might wonder whether X might vary.)
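A minimal sketch of this "equation" in Python (my own illustration; the `action_probability` name is just a label I made up): the free-will term is the uniform 1/X, the passive term is 100% for the option most wanted, and the action's probability is their average.

```python
def action_probability(x, passive):
    """Average of the free-will (uniform) probability 1/x and the
    passive-inclination probability (1.0 if favored, 0.0 if not)."""
    return (1.0 / x + passive) / 2.0

# Four abstract choices (A, B, C, D), with D the one I most want:
print(action_probability(4, 1.0))  # 0.625, i.e. 62.5%
```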
Now a corollary of all this, I guess, would be that if I did NOT do D

but chose, say, C :O then the probability I would do that is actually (0% + 25%)/2, or 12.5%; that is, there is only ever a 12.5% chance that I would act contrary to my passive feelings.*
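One reassuring arithmetic check (my own, not part of the original argument): with X = 4, the favored option's 62.5% plus the three contrary options at 12.5% each sum to exactly 100%, so the "equation" does hand back a proper probability distribution over the four choices.

```python
def action_probability(x, passive):
    # (free-will term 1/x + passive term) / 2, as in the example above
    return (1.0 / x + passive) / 2.0

x = 4
favored = action_probability(x, 1.0)   # 0.625
contrary = action_probability(x, 0.0)  # 0.125 for each of the other three
print(favored + (x - 1) * contrary)    # 1.0: the options exhaust the probability
```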
(Keep in mind, I'm working with a pretty primitive take on probability; I've no idea how Bayesian ideas, or who knows what else, would fit into this, to say nothing of how to bridge quantum statistics with macro-volitional ones...)
*EDIT: Now, one might object: what about cases where my choice is just between different passive feelings? Technically, though, I think this situation cannot arise; there is always the limit case of deciding to shuck feelings as much as possible, eh? This does hint at a variable X, though, I suppose.