We all worry (a bit) about data security. Yet many of us at the same time use Oyster to get around London, or sign up for supermarket loyalty cards. Both provide data about our habits. What’s the difference between our favorite high street brands and the big beasts we fear will prowl the Internet, pouncing on the details of our lives?
Trust is of course at the root of it. We trust that our supermarkets don’t want to stray across a big red line by making unacceptable use of our shopping data.
We need to build that trust into the digital world, especially as we enter the era of Big Data, with very large numbers of sensors operating all around us.
The point is that at the heart of the Big Data world lies the assumption that data and its uses will drive the revolution. Sensors will only have value to the extent that the data they produce has value, whether that is a public good like cleaner cities, better integrated transport, or improved health for all. Or it might be private data, used in combination to personalize services for companies and consumers, so that they get better information and, yes, better advertising.
If this revolution is to succeed, and bring all the benefits associated with it, the public must have trust both in how their data is protected from unlawful hacking and in how their data is used by those who collect it.
techUK has been looking at this and is working on some draft Trust principles for an IoT world. Key to this will be encouraging maximum transparency about how data is used. Hardly anyone reads Terms and Conditions. So instead of implicitly saying ‘read the small print’, we may want to be upfront about the need for clarity and transparency.
This will be the key to building the sort of trust which we take for granted in other areas of our lives.