08: The Facebook files, machine learning and fake news continue

Such is the nature of our continuously evolving subject matter that by the time we have completed a podcast, there is often already more, and breaking, news on key discussion points.

Just days after recording our latest musings last week on the challenges and opportunities surrounding big-small data, with multiple references to Facebook and their attempts to counter fake news, the Guardian broke the story of the Facebook Files, based on more than 100 internal training manuals, spreadsheets and flowcharts outlining the rules and guidelines for what its two billion users can and can’t post on the site.

This will add to the global debate about the role and ethics of Facebook and other social media platforms in public discourse. And without a doubt it will be a key topic of future SDF podcasts.


In this latest episode 8, we explore whether, following the ‘interesting times’ of 2016, we are seeing some sort of ‘regression to the mean’: a right-wing populist failing to succeed in the Dutch general election, Emmanuel Macron becoming president of France, and Chelsea FC winning the Premier League (following Leicester City’s surprise success last year).

Machine learning and managing change

Despite the return to more predictable outcomes of major events, it is not all ‘back to normal’ – quite the contrary, as Sam argues in our discussion of a recent Royal Society study of attitudes towards machine learning and artificial intelligence.

When it comes to permanent change driven by technological advancement, the genie is out of the bottle and it is too late to resist. The choice, as he quotes Robert Browning, is whether we “greet the unseen with a cheer”, or whether fear will dominate how we deal with inevitable change.

Neville has argued for a long time now that AI done right represents augmented, rather than artificial intelligence. It is about enhancing, rather than replacing, human capabilities – with the exception of highly repetitive and onerous tasks that lend themselves to automation and thus free up humans for higher level activities.

The Royal Society study raises the question of how we will ensure that society as a whole can benefit from productivity dividends. Given the multitude of viewpoints and predictions, Neville is cautious: “all we know is what we think we would like it to be like”. We all agree that we need to learn to live with more uncertainty, as many of the jobs that future generations will do (and how they will do them) have not been invented yet.

What people don’t understand scares them, or at least makes them sceptical. It is therefore significant that only 9% of respondents to the Royal Society study about machine learning actually recognise and understand the phrase.

The results might not be so different if people were asked to define what an algorithm is (in the simplest, non-IT terms: a sequence of instructions, or a set of rules, that is followed to complete a task).
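To make that plain-language definition concrete, here is a toy illustration (not from the episode, and not how any real platform works): a few lines of Python expressing an ‘algorithm’ as three rules a hypothetical news feed might follow to rank stories. The story data and the scoring rule are invented for illustration only.

```python
# An algorithm in the plainest sense: a fixed sequence of rules followed to
# complete a task -- here, ranking stories by a made-up "engagement" score.

def rank_stories(stories):
    """Rule 1: score each story. Rule 2: sort by score. Rule 3: return the titles."""
    scored = [(s["likes"] + 2 * s["shares"], s["title"]) for s in stories]  # rule 1
    scored.sort(reverse=True)                                               # rule 2
    return [title for _, title in scored]                                   # rule 3

# Hypothetical example data.
stories = [
    {"title": "Election result", "likes": 120, "shares": 30},
    {"title": "Cat video", "likes": 200, "shares": 90},
    {"title": "Science report", "likes": 80, "shares": 10},
]

print(rank_stories(stories))  # titles in descending score order
```

Note that the outcome is entirely determined by whoever writes the rules – change the weighting in rule 1 and the ranking changes – which is precisely why the tuning of such instructions matters in the discussion that follows.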

Facebook, fake news and platforms as publishers

The discussion of algorithms in the context of social media focuses on the fact that these instructions are set, tuned and adjusted to serve the commercial needs of the big Silicon Valley firms. Robert Thomson, CEO of News Corp and former editor of The Times, published a strongly worded editorial in The Times on 10th April in which he claimed that “the two most powerful news publishers in human history have created an ecosystem that is dysfunctional and socially destructive”.

Google and Facebook now have about two-thirds of the global digital advertising market, and account for 90% of its incremental growth over the last year. Thomson criticised the companies for privatising gains while socialising costs: both are great at monetising, but not at monitoring.

Thomson described authenticated authenticity – provenance, accuracy, reality – as an asset of increasing value in understanding the ebb and flow of humanity. His company News Corp launched its own programmatic advertising offering in April, whilst at the same time championing brand safety. Listen to Sam’s passionate point about the inherent conflict of interest there.

Sam is also sceptical about calls for the likes of Google and Facebook to fund quality journalism (as put forward in The Platform Press by the Tow Center for Digital Journalism at Columbia’s Graduate School of Journalism). He sees more potential in the voluntary membership scheme that Wikipedia practises and that has now also been introduced by the Guardian.

Conclusion

In his closing remarks, Neville makes a passionate plea that it is the role of communicators to promote the narrative and imagery of augmented intelligence to help people cope with change. In order for that narrative to connect with its audience, it requires trust – and that raises the question as to who we trust to provide accurate information.

With regard to news media and the challenges of fake news (and its consequences, such as the gradual erosion of trust), there is a trend to trust the sharer of a story rather than its source or publisher.

Our general, rather broad advice is to question everything. More specifically, seek out sources with a proven track record that you can trust, with authors who link to the things they state – to allow verification.

Or as the Roman poet Horace put it some 2,000 years ago: Sapere aude. Dare to know.
