strange independence

$$% \gdef\bar#1{\overline{#1}} \gdef\and{\cap} \gdef\or{\cup} $$

The definition of independence of two events in a sample space is that the probability of one doesn't depend on the other: $$ P(A|B) = P(A) $$ This definition feels necessary, but not sufficient. I can imagine at least two complicating cases:

symmetries

First, the toy examples with dice and stuff are all symmetric. But it feels like there are situations where the dependence is unidirectional. For instance, whether the bank is open depends on what day of the week it is, while the days of the week are determined by the relentless march of time, regardless of what the bank does.

That relationship, however, is about causation; the question here is about conditional probability. If I wake up after a long sleep and look at the clear blue sky, all seven days of the week are equally likely. But if I wake up and notice that the bank is open, then the probability of "it's Saturday" drops to zero, while the probability of "it's Monday" goes from one-seventh to one-fifth. Or a little less than one-fifth, because bank holidays are more likely on Mondays.

The bank doesn't cause the days of the week to change; but information about what the bank is doing gives me information about the days of the week.
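To make the update concrete, here is a toy enumeration (my own sketch, assuming for simplicity that the bank is open exactly Monday through Friday and ignoring holidays):

```python
# Toy sketch of the day-of-week update, assuming the bank is open
# exactly Monday through Friday and each day is a priori equally likely.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
prior = {d: 1 / 7 for d in days}                 # P(day) before any observation
bank_open = {"Mon", "Tue", "Wed", "Thu", "Fri"}  # observation: the bank is open

# Condition on the observation: zero out the excluded days, renormalize the rest.
evidence = sum(prior[d] for d in bank_open)      # P(bank open) = 5/7
posterior = {d: (prior[d] / evidence if d in bank_open else 0.0) for d in days}

print(posterior["Sat"])   # 0.0   -- "it's Saturday" is ruled out
print(posterior["Mon"])   # ~0.2  -- one-seventh has become one-fifth
```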

coincidences

Second, and more salient, is the idea that $A$ and $B$ might be intimately related to each other, but that the probabilities might appear independent by some coincidence. For example, suppose that $A$ is "caused by" $B$, in the manner of the bank operation days above, but in a parametric way: $A = A(t)$ and $B = B(t)$. Perhaps there are some values of $t$ where $B$ makes $A$ more likely, $P(A|B) > P(A)$, and other values where the reverse holds, $P(A|B) < P(A)$. If everything involved behaves continuously in $t$, some intermediate-value-theorem-type argument would say we have to pass through a $t_0$ with $P(A(t_0)|B(t_0)) = P(A(t_0))$. Is that "independence"? Maybe. I would want to construct some examples to think about it.
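Here is a minimal sketch of such an example (my own construction, with the window width and positions chosen purely for convenience): let $X$ be uniform on $[0,1]$, let $B = \{X < 1/2\}$, and let $A(t)$ be a sliding window of width $1/4$ starting at $t$. Both events are built from the same $X$, yet at one value of $t$ the numbers line up so that $P(A(t)|B) = P(A(t))$.

```python
# X is uniform on [0, 1]; B = {X < 1/2}; A(t) = {t < X < t + 1/4}.
# A(t) is defined from the very same X as B, but P(A|B) - P(A) changes sign
# as t varies, so by continuity it passes through zero.

def interval_overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of the intersection of two intervals."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def probabilities(t, width=0.25):
    """Return P(A(t)) and P(A(t) | B) for B = {X < 1/2}, X ~ Uniform(0, 1)."""
    p_a = interval_overlap(t, t + width, 0.0, 1.0)    # P(A(t))
    p_ab = interval_overlap(t, t + width, 0.0, 0.5)   # P(A(t) and B)
    p_b = 0.5                                         # P(B)
    return p_a, p_ab / p_b

for t in [0.0, 0.125, 0.25, 0.375, 0.5, 0.625]:
    p_a, p_a_given_b = probabilities(t)
    print(f"t = {t:5.3f}   P(A) = {p_a:.3f}   P(A|B) = {p_a_given_b:.3f}")
```

For small $t$ the window sits inside $B$ and $P(A|B) > P(A)$; for large $t$ it sits outside $B$ and $P(A|B) < P(A)$; the crossing happens at $t_0 = 3/8$, where the two probabilities are equal even though $A$ and $B$ are anything but unrelated.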