Trust is relative, ranging from no trust at all to the complete trust that arises from commonly shared expectations of honest, cooperative behaviour. The political scientist Francis Fukuyama defined trust in this way in his 1995 book Trust. The very fact that he could write a whole book about trust shows what a complex subject it is!
Whenever we use a piece of software, we are trusting that the code will do what it is supposed to do and not attack our systems. In other words, we trust the company that produced the software to behave ethically and to sell us code that does its job and nothing else. Whenever we outsource our IT to a specialised company, we trust that its staff will not pry into the information in our files. We trust that a security mechanism will protect our data rather than put it at risk, because the creators or vendors of that mechanism behave in accordance with accepted ethical standards. We also trust that the mechanism will protect us against bad actors who would violate those shared standards for their own ends.
A security policy is based on levels of trust: it states what we expect of our teams in terms of their behaviour. When we apply a mechanism to protect a system, we create a trust boundary: access is granted only to the people we trust, and everyone else is barred. Indeed, all security decisions rest on trust decisions, and most companies grant access only where a team member needs it to carry out their job. Partly this is to protect confidential data and comply with legislation; it is also to prevent staff from accidentally (or deliberately) sabotaging data.
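In software, such a trust boundary often takes the shape of an access-control check that grants each role only the resources its job requires. The following is a minimal sketch of that least-privilege idea in Python; the role names, resources, and policy table are all hypothetical, purely for illustration.

```python
# A minimal sketch of a trust boundary enforced through least-privilege
# access control. Roles, resources, and the policy table are hypothetical.

# Map each role to the only resources its holders need for their job.
POLICY = {
    "analyst": {"reports"},
    "hr": {"personnel_files"},
    "admin": {"reports", "personnel_files", "audit_logs"},
}

def is_allowed(role: str, resource: str) -> bool:
    """Return True only if the role's job requires the resource."""
    # Unknown roles get an empty set, so access is denied by default.
    return resource in POLICY.get(role, set())

# Inside the boundary: an analyst can read reports...
assert is_allowed("analyst", "reports")
# ...but is barred from personnel files, even by accident.
assert not is_allowed("analyst", "personnel_files")
```

Note the design choice: anyone whose role is not listed falls through to an empty set of permissions, so the boundary denies by default rather than trusting by default.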