In the wake of the London terror attack, UK Prime Minister Theresa May
has lambasted internet companies, accusing them of providing a safe space for
terrorist ideology.
A day after the London Bridge and Borough Market attack, May
accused the firms of giving "this ideology the safe space it
needs to breed." She pressed for "international agreements that regulate
cyberspace to prevent the spread of extremism and terrorism planning."
Her statements have once again pushed the debate between digital privacy
and security to the fore.
Tech companies such as Facebook, Google, and Twitter have denied these
assertions, stating that they are already taking concrete steps to prevent
and remove extremist content. According
to Facebook's Director of Policy, Simon Milner: "We want Facebook to be a hostile
environment for terrorists. Using a combination of technology and human review,
we work aggressively to remove terrorist content from our platform as soon as
we become aware of it — and if we become aware of an emergency involving
imminent harm to someone's safety, we notify law enforcement."
Understandably, it is quite difficult for these online platforms to
moderate all the information uploaded to their sites, given the sheer volume. Some
critics have said that Theresa May's calls are dangerous, disproportionate, and
"intellectually lazy."
According to Business Insider, Facebook already prohibits content that
supports terrorist activity and lets users report potentially infringing
material to human moderators. It also uses technical measures, such as
image-matching technology that checks newly uploaded photos against images
already banned from the platform for promoting terrorism, and it contacts law
enforcement if it sees potential evidence of a forthcoming attack (or an attempt
at human harm more generally).
Google removes links to illegal content once notified, and YouTube takes
down inciting videos and bans accounts believed to be operated by agents of
foreign terrorist organisations. Twitter suspended 376,890 accounts in the six
months leading up to December 2016; 74% of these were detected by its internal
tools, and just 2% came from government requests.
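The image-matching approach described above can be sketched in a highly simplified form: compare a fingerprint (hash) of each uploaded file against a database of fingerprints of previously banned images. This is an illustrative toy only — the function, data, and exact-hash matching here are assumptions; production systems use perceptual hashing that tolerates resizing and re-encoding, which a plain cryptographic hash does not.

```python
import hashlib

# Hypothetical database of SHA-256 digests of images already banned
# from the platform (the byte strings here stand in for image data).
banned_hashes = {
    hashlib.sha256(b"previously-banned-image-bytes").hexdigest(),
}

def is_banned(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a banned image."""
    return hashlib.sha256(image_bytes).hexdigest() in banned_hashes

print(is_banned(b"previously-banned-image-bytes"))  # exact match -> True
print(is_banned(b"a-brand-new-photo"))              # no match -> False
```

Because any change to the bytes produces a completely different digest, exact-hash matching only catches verbatim re-uploads, which is why real deployments rely on perceptual fingerprints instead.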
Despite these measures, British Home Secretary Amber Rudd has recently called on tech companies to "limit the use of end-to-end encryption".
However, critics say disabling encryption in popular apps will not deter
criminals, who could simply switch from one app to another, create their
own messaging apps, or, worse still, retreat further into the dark web. Meanwhile,
if encryption were removed, there is no guarantee that messages sent by
law-abiding citizens would not become easy pickings for criminals to intercept. Tim
Berners-Lee, inventor of the World Wide Web, said: "Now I know that if
you're trying to catch terrorists it's really tempting to demand to be able to
break all that encryption but if you break that encryption then guess what - so
could other people and guess what - they may end up getting better at it than
you are."
End-to-end encryption means that messages cannot be decoded by
anyone other than the sender and the recipient while en route between devices,
including by the companies themselves and law enforcement. It is used in
messaging services including Facebook's WhatsApp and Apple's iMessage, among others.
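The core idea can be shown with a toy one-time-pad cipher: once a message is combined with a secret key shared only by the two endpoints, an interceptor sees meaningless bytes. This is a minimal sketch of the principle only — real messengers like WhatsApp use the Signal protocol, not XOR with a random key.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at the bridge"
key = os.urandom(len(message))        # secret known only to the two endpoints

ciphertext = xor_cipher(message, key) # what an interceptor would see in transit
plaintext = xor_cipher(ciphertext, key)
print(plaintext)                      # recipient recovers the original message
```

Without the key, the ciphertext carries no recoverable information — which is also why "breaking" the encryption for investigators necessarily breaks it for everyone else, as Berners-Lee warns.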
It may be difficult to decide whether to surrender your privacy so that
the government can conduct better surveillance and provide security. If that
truly is the solution, perhaps it is welcome. But what guarantee is there that it
will succeed in rooting out terrorism, and that the government, or even criminals,
will not use it to spy on you? That is the debate.
Tell us what you think. Would you choose the privacy of your phone over
surveillance and security concerns?