While working on a Hoare logic for probabilistic programs [1], we encountered an interesting choice in how to define the operational semantics of the underlying probabilistic imperative language *PRIMP*. We could either define commands as mappings from states to states and then “lift” those commands to apply to distributions, or define the commands over state distributions as a whole. Consider, for example, the conditional `if b then c1 else c2`. We could apply it to each state in the distribution’s support, where `b` evaluates to either `true` or `false`, or we could apply `c1` to the entire sub-distribution where `b` is true and `c2` to the complementary sub-distribution. We showed in our Coq development that the two approaches are equivalent. Interestingly, when we looked at the non-deterministic choice `c1 ⊔ c2`, this was no longer true. If we reduce `c1 ⊔ c2` to `c1` or `c2` for the whole distribution, we prevent the non-determinism from reacting to the random events that have already occurred, and this allows us to verify many properties that wouldn’t hold if we reduced `c1 ⊔ c2` at the level of individual states.
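As a minimal sketch of the two strategies for the conditional (this is an illustration in Python, not the Coq formalization; all names like `lift`, `if_split`, and `if_statewise` are made up for this example), distributions can be modeled as finite maps from states to probabilities:

```python
# Distributions are {frozen_state: probability}; states are dicts.

def freeze(s):                      # hashable snapshot of a state
    return tuple(sorted(s.items()))

def thaw(fs):
    return dict(fs)

def lift(cmd, dist):
    """Run a state-to-state command on every state in the support."""
    out = {}
    for fs, p in dist.items():
        fs2 = freeze(cmd(thaw(fs)))
        out[fs2] = out.get(fs2, 0.0) + p
    return out

def if_statewise(guard, c1, c2, dist):
    """Strategy 1: decide the branch separately at each state."""
    return lift(lambda s: c1(s) if guard(s) else c2(s), dist)

def if_split(guard, c1, c2, dist):
    """Strategy 2: split the distribution on the guard, run each
    branch on its sub-distribution, then recombine."""
    sub_t = {fs: p for fs, p in dist.items() if guard(thaw(fs))}
    sub_f = {fs: p for fs, p in dist.items() if not guard(thaw(fs))}
    out = lift(c1, sub_t)
    for fs, p in lift(c2, sub_f).items():
        out[fs] = out.get(fs, 0.0) + p
    return out

# Uniform distribution over b ∈ {0, 1}.
dist = {freeze({'b': 0}): 0.5, freeze({'b': 1}): 0.5}
set_x0 = lambda s: {**s, 'x': 0}
set_x1 = lambda s: {**s, 'x': 1}
guard = lambda s: s['b'] == 1

# For conditionals the two strategies agree:
assert if_statewise(guard, set_x1, set_x0, dist) == if_split(guard, set_x1, set_x0, dist)
```

For deterministic branches the two strategies always produce the same output distribution; the interesting divergence only appears once the branches contain non-deterministic choice.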

We soon realized that the distinction between these evaluation strategies corresponds to the distinction between the Arthur-Merlin (*AM*) and Interactive Polynomial (*IP*) complexity classes. *AM* uses public coins, meaning that the adversary knows the outcome of every random event. By contrast, *IP* can use hidden coins to thwart the prover, as in the simple protocol for verifying that two graphs are non-isomorphic: the verifier randomly selects one of the graphs, relabels it, and asks the prover to identify which graph it was.
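That non-isomorphism protocol can be simulated in a few lines (an illustrative sketch, not a formal definition of *IP*; graphs are edge lists, and this toy prover checks isomorphism by brute force):

```python
import random
from itertools import permutations

def relabel(graph, n):
    """The verifier's move: send a uniformly relabelled copy of a graph."""
    perm = list(range(n))
    random.shuffle(perm)
    return frozenset(frozenset((perm[u], perm[v])) for u, v in graph)

def honest_prover(g0, g1, n, challenge):
    """Answer 0 iff the challenge is isomorphic to g0 (brute force)."""
    for perm in permutations(range(n)):
        if frozenset(frozenset((perm[u], perm[v])) for u, v in g0) == challenge:
            return 0
    return 1

def round_of_protocol(g0, g1, n, prover):
    secret = random.randint(0, 1)          # the verifier's hidden coin
    challenge = relabel((g0, g1)[secret], n)
    return prover(g0, g1, n, challenge) == secret

# Two non-isomorphic graphs on 3 vertices: a path and a triangle.
path = [(0, 1), (1, 2)]
triangle = [(0, 1), (1, 2), (0, 2)]
assert all(round_of_protocol(path, triangle, 3, honest_prover) for _ in range(50))
```

If the graphs really are non-isomorphic, the prover identifies the source graph every round; if they were isomorphic, it could do no better than guessing, and the hidden coin is exactly what makes that guess fail half the time.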

It turns out that there are a variety of interesting operational semantics for programs combining probability and non-determinism (we’ve identified eight so far), appropriate for domains ranging from security to game theory. All of these can be thought of as encoding what information is available to a non-deterministic adversary. We presented a few of these models at a recent ICFP poster competition.

This raises two questions: which models are worth exploring in detail, and how many are we missing?

Models for Probabilistic Programs with an Adversary: Extended Abstract, Slides

*Robert Rand and Steve Zdancewic, University of Pennsylvania*

[1] Rand, R. and S. Zdancewic, “VPHL: A verified partial-correctness logic for probabilistic programs”, Mathematical Foundations of Programming Semantics, 2015.

You’ll have to explain to me in Florida your claim that `if b then c else c` is different from simply `c`, even to an adversary who knows the program structure but is ignorant of any runtime behaviour. I suppose if an adversary can see the program source, then syntactically different programs are indeed different; but then you have no program equivalences at all, right? `skip; skip` is different from `skip`. But that doesn’t seem like a very interesting theory!

I can briefly give an illustration here, though I’d be glad to chat in Florida as well.

Consider the following program:

`b = 0 ⊕ b = 1;`

`if b then (x = 0) ⊔ (x = 1) else (x = 0) ⊔ (x = 1)`

It’s actually quite difficult in this case to ensure that the adversary’s choices in the two branches are identical: if the programs in each branch are syntactically distinct but semantically equivalent, we would need complex (in general, undecidable) program analysis to guarantee that the adversary acts identically in each branch.

This program allows the adversary to guarantee that `b xor x = 1`.
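To see why, here’s an illustrative simulation (not part of the original exchange; the function name is made up): an adversary that observes the coin flip before resolving `(x = 0) ⊔ (x = 1)` wins every run.

```python
import random

def run_with_observing_adversary():
    b = random.randint(0, 1)   # b = 0 ⊕ b = 1, a fair coin
    x = 1 - b                  # the adversary sees b and picks x accordingly
    return (b ^ x) == 1        # b xor x = 1, always

assert all(run_with_observing_adversary() for _ in range(1000))
```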

The core idea of the model where the adversary is ignorant of program behavior is that in the similar program

`(b = 0 ⊕ b = 1); (x = 0 ⊔ x = 1)`

the adversary cannot guarantee that `b xor x = 1` (in fact, the probability remains 1/2).
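In the same illustrative style (again my own sketch, not from the exchange): if the adversary must commit to a resolution of `x = 0 ⊔ x = 1` independently of the coin, neither choice does better than 1/2.

```python
import random

def run_with_ignorant_adversary(choice):
    b = random.randint(0, 1)   # b = 0 ⊕ b = 1, a fair coin
    x = choice                 # resolved without seeing b
    return (b ^ x) == 1

trials = 100_000
for choice in (0, 1):
    wins = sum(run_with_ignorant_adversary(choice) for _ in range(trials))
    assert abs(wins / trials - 0.5) < 0.02   # ≈ 1/2 either way
```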

Hence `if b then c else c` isn’t equivalent to `c`. But `skip; skip` is still equivalent to `skip` according to our semantics.

Ta. I’ll have to think about that… and maybe discuss with you in St Pete.
