Self-monitoring on the web
September 25th, 2012 by David Bradley
We are repeatedly warned by the media about how web 2.0, social networking sites, search engines and countless apps are compromising our privacy, hoarding our personal data, and tracking our browsing habits. Moreover, anyone who has run a vanity search on their own name or on a company brand will know only too well how much garbage can accumulate out there on the web. The maxim “If you don’t want it on the Internet, don’t put it on the Internet” certainly still holds true, but a lot of the material about you, your company or your organisation may be on the internet not because you specifically put it there but because it was somehow scraped from your online activities.
A team based at the Institute of Software Technology and Interactive Systems, at Vienna University of Technology and SBA-Research, also in Vienna, Austria, explains how web 2.0 has changed the online landscape, from a flat, fairly one-sided state to a two- and even three-way interaction in which users are no longer passive consumers of data, whether that is text, images, movies or sounds, but creators and commentators, sharers and socialisers.
The researchers point out that this brings with it new responsibilities on all sides:
“The disclosure of personal/organizational information in Web 2.0 via social networks, digital contributions and data feeds has created new security and privacy challenges,” they explain. “Designing transparent, usable systems in support of personal privacy, security, and trust, requires advanced knowledge retrieval techniques that can support information sharing processes by applying appropriate policies.”
As such, they are developing a system that allows the “stakeholders” (you and I, and the companies and organisations with which we interact) to extract, analyze and visualize data about those stakeholders and the connections between them. Most importantly, from the personal privacy perspective, the system will allow users to carry out a more definitive “vanity search”: to monitor their activity on the internet and to reveal the inferences about their behavior and character that might be drawn from that same data by third parties with marketing or even malicious intent.
The team has devised a five-step plan that could be readily implemented in software to extract both data and inferences:
- The data will be extracted from social web platforms either by the API for the target platform or via a dedicated extractor component.
- Text analysis techniques such as word-sense disambiguation are applied to disambiguate the text and annotate it with useful semantic information.
- Self Organizing Maps are used to visualize the results and give the user an overview of their social network context.
- Quality measures are applied to identify high-quality entries that could serve as templates for assistive services.
- A further outcome of the self-organizing maps is a set of self-monitoring results, obtained by applying the user’s ethical requirements to highlight points of interest in a given context.
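To make the five steps concrete, here is a minimal sketch of such a pipeline in Python. Everything in it is an illustrative assumption rather than the researchers' actual implementation: the sample posts stand in for a real platform API call, a simple keyword tagger stands in for word-sense disambiguation, and grouping posts by topic stands in for the Self-Organizing Map overview.

```python
# Hypothetical sketch of the five-step self-monitoring pipeline.
# All names, sample data and keyword lists are illustrative assumptions.
from collections import defaultdict

# Step 1: extract posts (canned sample standing in for a platform API
# or a dedicated extractor component).
def extract_posts(platform):
    sample = {
        "twitter": [
            "Loving my new job at Acme Corp!",
            "Out of office all week, house is empty",
            "Great conference talk on data privacy today",
        ],
    }
    return sample.get(platform, [])

# Step 2: annotate each post with coarse semantic tags (keyword
# matching stands in for real word-sense disambiguation).
TOPIC_KEYWORDS = {
    "work": {"job", "office", "conference"},
    "location": {"house", "home", "office"},
    "privacy": {"privacy", "data"},
}

def annotate(post):
    words = {w.strip(".,!?").lower() for w in post.split()}
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws}

# Steps 3-5: group posts by topic (a crude stand-in for the SOM
# overview) and flag entries that touch topics the user has marked
# sensitive, i.e. apply the user's own "ethical requirements".
def monitor(posts, sensitive_topics):
    clusters = defaultdict(list)
    flagged = []
    for post in posts:
        topics = annotate(post)
        for t in topics:
            clusters[t].append(post)
        if topics & sensitive_topics:
            flagged.append(post)
    return dict(clusters), flagged

posts = extract_posts("twitter")
clusters, flagged = monitor(posts, sensitive_topics={"location"})
print(flagged)  # posts leaking location/absence information
```

Run against the sample data, this flags the “out of office, house is empty” post as a location leak, which is exactly the kind of inference a third party with malicious intent could draw.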
The team reports preliminary tests on Twitter and Facebook users, displaying visualizations that show how such an analysis can reveal topic “trouble spots” and “dangerous” friends. They are currently unifying the various processes so that they can develop an app that individuals could use to monitor their presence on the web and potentially improve their public image. The same system might be used to improve a brand in the broadest sense, whether commercial, political or not-for-profit.