Suppose a software organization wants to make a promise about its data practices, for example: “We don’t store information on your location”. They can keep that promise in two ways: code or policy.
If they were keeping it in code, they would need to be open source, and would simply make sure the code didn’t transmit location information to the server. Anyone could review the code and confirm that the promise is being kept. (It’s sometimes technically possible for a company to publish source code that does one thing and binaries that do another, but if that were spotted, there would be major reputational damage.)
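A minimal sketch of what a promise kept in code can look like: the payload sent to the server is built from an explicit whitelist of fields, so location data structurally cannot be transmitted. All names here are hypothetical, for illustration only.

```python
# Hypothetical sketch: a whitelist-based payload builder. Because only
# fields in ALLOWED_FIELDS survive, location data cannot be sent even
# if it is present in the raw input.

ALLOWED_FIELDS = {"app_version", "os", "crash_count"}

def build_payload(raw_data: dict) -> dict:
    """Keep only whitelisted fields; anything else (e.g. latitude,
    longitude, IP-derived geodata) is dropped before transmission."""
    return {k: v for k, v in raw_data.items() if k in ALLOWED_FIELDS}

payload = build_payload({
    "app_version": "1.2.3",
    "os": "Linux",
    "crash_count": 0,
    "latitude": 51.5,   # stripped: not in the whitelist
    "longitude": -0.1,  # stripped: not in the whitelist
})
assert "latitude" not in payload and "longitude" not in payload
```

A reviewer reading the open-source client can verify the promise by checking the whitelist and confirming that every outbound request goes through this function.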
Geeks like promises kept in code. They can’t be worked around using ambiguities in English, and they can’t be changed without the user’s consent (to a software upgrade). I suspect many geeks think of them as superior to promises kept in policy – “that’s what they _say_, but who knows?”. This impression is reinforced when companies are caught sticking to the letter but not the spirit of their policies.
But some promises can’t be kept in code. For example, you can’t avoid sending the user’s IP address, which normally reveals coarse location information, when making a web request. More complex or time-bound promises (“we will only store your information for two weeks”) also require policy by their nature. Policy is also more flexible: using a policy promise rather than a code promise can speed time-to-market, because it reduces software complexity and increases the ability to iterate.
Question: is this distinction, about where to keep your promises, useful when designing new features?
Question: is it reasonable or misguided for geeks to prefer promises kept in code?
Question: if Mozilla or its partners are using promises kept in policy for e.g. a web service, how can we increase user confidence that such a policy is being followed?
I prefer to have promises kept in _both_. These days code often updates frequently: in the case of Firefox, via an in-house update mechanism; on mobile devices, usually via the platform’s auto-update solution (app stores). Sometimes products are really wrappers around websites, and the code is transient and might update at any time.
I don’t think you need to invent new terminology (i.e. “promises kept in code”) for this. I think it is just a case of “actions speak louder than words”: if it can be checked (in code) that you don’t/can’t do something, it is more likely that you don’t do it than if you just say so.
Also, if a company says “we don’t track users’ locations”, is later found to have and perhaps use that capability, and then says “it’s only for debugging” or “it’s only for choosing a download site; it isn’t stored”, it seems untrustworthy, because of the difference between what it says and what it does. And that tends to reflect on its other promises.
Like I said, engineers prefer promises made in code :-)
Is that saying limited to engineers?
Policy regressed here; code hasn’t – well, probably. Sometimes it’s hard to find it in the code, too.
Engineers prefer promises in code because they understand what that means. They understand that code is precise and represents exactly what is being done – the compiler interprets and the machine executes the code in a single, repetitive, precise way.
It’s not because “they’re engineers”; that’s a side effect. It’s because “they are wiser on the subject”.
Well, I wonder whether there are other professions (lawyers?) who would prefer policy to code…
“Promises in code” are definitely preferable, because that means designing a system such that breaking those promises is impossible – the information cannot be gathered at all, or if gathered, is at least never stored. It’s a lot like doing password handling – ideally, the actual password never leaves the user’s computer, so it should never be possible for that information to leak from the server.
As you say though, that doesn’t mean there’s no room for policy – some things cannot reasonably be implemented as technical constraints. But it’s better to implement technically where possible, because while such implementations can have bugs that leak information, bugs can be closed over time. Policy breaches, on the other hand, are human errors, and you’ll never successfully close those off…
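To sketch the password analogy above: real systems would use a proper password-authenticated key exchange (such as SRP), but even a simple client-side key derivation illustrates the point that the plaintext never needs to leave the user’s machine – the server only ever sees a derived value. The parameters here are illustrative, not a recommendation.

```python
# Hypothetical sketch: the client derives a verifier from the password
# and sends only that, so the server cannot leak the plaintext password
# it never received.
import hashlib
import os

def derive_verifier(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; iteration count chosen only for illustration.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)          # stored server-side alongside the verifier
verifier = derive_verifier("correct horse battery staple", salt)
# Only `salt` and `verifier` go over the wire; the password string does not.
```

The design choice is the same as with the location promise: the system is structured so that breaking the promise is impossible, rather than merely forbidden.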
I agree with Mook: keep promises in both. Code and policy can both be easily changed, so that’s no guarantee of the future. But if you promise something and you code it that way, then it’s clear the promise is being kept.
If you just promise it and don’t code it that way, we can be suspicious that it’s not kept, because your promise and your code don’t match. If you code it that way, but don’t promise it in a policy, then we know you consider it subject to change whenever it’s convenient for it to change, as it’s just code like any other code. But when you do both, we know you’ve made a conscientious promise for the future and you’ve implemented it at least in the present.
Now of course *both* the code and the policy can change together, and that’s where the honor of the company comes in: an honorable company will keep the promises it makes (by keeping the code that implements them), whereas a dishonorable company will just change both whenever convenient, because it does not honor its promises.