The Small Data Forum podcast was created spontaneously and almost accidentally after your three co-hosts met on a panel at a media industry event in 2016, a few weeks before the EU Referendum.
After a lively debate featuring sometimes radically divergent views that kept our audience entertained well past the scheduled end time, seasoned podcaster Neville Hobson suggested to podcast ingénus Thomas Stoeckle and Sam Knowles that our ramblechats might work rather well in pod land.
Who were we to argue?
And so it came to pass – with Thomas’ wry titling – that the Small Data Forum came into being, with the inaugural episode dropping on 14 June 2016. Since then, we’ve taken a more-or-less-monthly, sideways look at the uses and abuses of data big and small in politics, business, and public life.
Timing is everything
Timing, they say – in the business that is “show” – is everything. How could we have anticipated the volume of data shenanigans and chicanery that would follow in the almost seven years since?
Brexit, Trump (“Brexit plus, plus, plus” according to POTUS 45), GDPR, Cambridge Analytica, Johnson and Cummings, COVID, the Metaverse, Partygate, Sue Gray, Putin x Ukraine, rampant inflation and interest rates, the cost-of-living crisis … the uses and abuses of data in our fractured, fractious world just keep on coming.
At the Small Data Forum Christmas party 2019 – a not-so-sober affair at Olivelli, our favourite pizzeria in Waterloo, South East London – we decided we were going to head to Thomas’ future permanent home in Andalucía for a bumper weekend of podcast recordings.
We booked our tickets and set our trip up for April 2020. In the immortal words of broadcasting legend Danny Baker when his football team, Millwall, go 2-0 up: “What could possibly go wrong?” (For those who don’t know football or Millwall, they are the Dominic Cummings of football, with one of their favourite chants being: “We are Millwall, we are Millwall, no-one likes us, we don’t care!”).
By late January 2020, there was some anxiety about what might or might not have leaked from a lab or a wet market in the Chinese city of Wuhan. By the end of February, anxiety had morphed into profound doubts that our pod-tourism trip could go ahead. And by 23 March, we were locked down and completely incapable of making our pilgrimage to Southern Spain for the foreseeable future.
Almost three years on, with the spectre of COVID receding into the background – at least for now – we three finally made it out to Andalucía. Fuelled by pizza, beer, and Rioja from Ríogordo’s finest eatery, 30 Y Tantos, the night before, and by copious vats of coffee first thing, we recorded our first-ever overseas episode. Always keen to try something new, we attempt to focus on a single issue. Sam sets the discussion up.
The unholy triptych – attention, mental health, gerrymandering
In the past 25 years, the world wide web and social media networks – resting, as Neville reminds us, on top of the internet – have produced three fairly seismically bad outcomes:
- An advertising-funded ecosystem, free at the point of use with platforms competing for our attention. In Jaron Lanier’s oft-repeated maxim, “if it’s free, you are the product”
- A pervasive mental health crisis, particularly experienced by younger, always-on users. Though as Anastasia Dedyukhina shows eloquently in her 2018 book, Homo Distractus, this is a transgenerational crisis
- Political gerrymandering and malign influence, from Cummings, Vote Leave, and Brexit to Trump, Bolsonaro, and Orbán
That unhealthy triptych seems – to Sam at least – to be net negative.
As we’ve discussed in recent episodes, the online world sits at a crossroads – and potentially a fairly exciting one at that – with the rampant and sudden onrush of AI thrilling much of the world (with the exception of the dodo-like PR industry).
With Tony Blair’s hand of history resting on our shoulders, Sam poses the question for debate: “What guardrails, if any, should be put in place by whom, so that we can avoid history repeating itself?” How do we prevent the possibilities offered by AI – artificial and augmented intelligence – from being another www.clusterf.uck?
Before the debate proper begins, Sam summons the spirit of the British comedian Dave Gorman, who once made a show and A Thing out of the Googlewhack – a Google search that produces one solitary result. In ancient philological terms, a Googlewhack would be known as a hapax legomenon – “a term of which only one instance of use is recorded”. And the Googlewhack-hapax-legomenon in question, which brilliantly defines us three stooges?
The Podnosticators.

Evidence of that rarest of birds: the Googlewhack
Thomas fesses up straightaway that he is more likely than Neville to be Mr Guardrails, more in favour of government regulation than allowing neoliberal market econometricians to self-regulate. AI needs – for Thomas – much more government oversight than the three Dubyas ever had.
The wild, wild West
Neville’s in favour of guardrails, but is less convinced than Thomas that governments can play a meaningful or effective role. He balances the issue of regulation against freedom – freedom of expression, freedom of commercial opportunity.
Looking at the AI topics on Techmeme, Neville sees AI today as very much the Wild West. And things are very different in 2023 – with more than half the planet’s eight billion inhabitants online today, vs the few hundred thousand – mostly in government departments and academia – who were online at the time of the www explosion in the 1990s.
For Neville, it’s not governments’ role to regulate – it’s everyone’s responsibility.
Thomas remains unconvinced. What’s happening now is not working. He’s also keen to wind the clock back further than Sam’s suggested 25 years, doubling the perspective to 50 years and more.
Not for the last time this episode, Thomas summons the work of Mariana Mazzucato, Professor in the Economics of Innovation and Public Value at University College London. In her book The Entrepreneurial State, Mazzucato assesses the extent to which innovations have been generated by private enterprise or by government.
Following the Sputnik Shock and the U.S. falling behind in the Space Race to the systemically underfunded Soviet Union, successive U.S. administrations pumped billions into state-funded innovation, via DARPA and Stanford – particularly its d.school (design school). This investment led to the forerunner of the internet, ARPANET, as well as Windows, touchscreens, and GPS. No state funding, no Jobs, Gates, or Zuckster. The Silicon Valley economic miracle is revealed – by Mazzucato and others – to be more of a Silly Con Valley.
Information overload
Neville is reminded of Alvin Toffler’s Future Shock, the book he wrote with his wife, Heidi (née Adelaide Farrell), in 1970, which first identified the psychological state into which individuals and societies were falling as a direct result of information overload – a term the Tofflers did much to popularise.
Thomas goes even further back to the future, pointing to the nineteenth-century creation of the advertiser-funded media model with newspaper advertising, and the foundation of the BBC – an institution whose 101-year history may well now be in the balance because of reactions to football commentator Gary Lineker and his political musings – though we’ll come back to that in another episode of the Ríogordo Sessions.
Thomas brings us a bit more up to date, with a look at commercial TV regulation in the U.S. In 1949, the Federal Communications Commission introduced the Fairness Doctrine, requiring licensed broadcasters such as ABC, CBS, and NBC – like the U.K.’s BBC – to be balanced in their output. The Doctrine was swept away in 1987 under Ronald Reagan.
This led directly to the rise and rise of Ted Turner, CNN, shock-jock hacks from Rush Limbaugh to Sean Hannity, and ultimately Fox News.
This change was exacerbated by the Supreme Court’s 2010 ruling in Citizens United, which opened the floodgates for Big Money’s political communications and campaign funding. Candidly, Thomas sees these moves as central in transforming the modern political landscape into Dante’s Inferno.
Inferno, purgatorio, paradiso?
For Sam, the route out of the Inferno – into an initial period of Purgatorio and on to Paradiso – is most definitely NOT self-regulation.
Having worked in the booze industry’s “enlightened self-interest group” in the 1990s – the U.K.’s self-regulatory Portman Group funded by the international drinks industry – he’s all too aware of the knots in which so-called “sin industries” can tie themselves to avoid and evade regulation by putting on a good show of self-regulation.
Neville cites the splendid film (and book) Thank You For Smoking as more case study than fiction, and we nod and “Mmmmm” our accord.
Sam well recalls his Portman self being compared to “Dracula curating a blood bank” and “your pet cat looking after the goldfish when you go on holiday” by anti-alcohol industry campaigners.
Self-regulation works as a smokescreen – as a Monty Python-style “confuse-a-cat” diversionary tactic – and it can work for many years. But in terms of effecting actual change, it’s regulation and not self-regulation that works. “How many more Molly Russells do we need?” he asks rhetorically as we Podnosticators three consider how to keep social media platforms in check.
W(h)ither Online Safety?
Neville brings up the U.K. Government’s endlessly delayed Online Safety Bill, going through yet more rounds of filibustering tweaks, but still nowhere near the priority level it needs to be at to start regulating the industry, or to start tackling the epidemic of mental ill health foisted on those addicted to the dopamine hit of chasing another like, retweet, or favourite.
The current delay is caused by the Department for Culture, Media and Sport – thank heavens now clear of Nadine Dorries – looking to get a back door into WhatsApp’s end-to-end encryption. Thomas muses on a Blairite Third Way – an Aristotelian middle way – but concludes that this is probably a non-starter. He cites the raft of fake or smokescreen ESG reporting from global corporates as a case in point. Self-regulation breeds lip service.
Sam diverts us back to our debate on AI and cites an article from the Times Higher by Colm O’Shea of New York University.
O’Shea argues that the very nature of generative AI engines such as ChatGPT produces convergent rather than divergent thinking: answers that narrow rather than open up responses, simple gist rather than vivid explanation, bland syntheses rather than insightful new connections. While we may still have time to set regulatory frameworks for AI, it may not yet be troubling the scorers in terms of genuine human creativity after all.
Thomas broadens our perspective by comparing AI to electricity. Neither is a simple, monolithic entity that can simply be figured out, given all the things it might potentially be able to do; that’s true of AI and electricity alike.
But he also draws our musings towards a conclusion, recommending that AI tools should be subject to the same kinds of rules and regulations that govern novel medicines and pharmaceutical products – overseen by the likes of the EMA, NICE, and FDA: the European Medicines Agency in Europe, the National Institute for Health and Care Excellence in the U.K., and the Food & Drug Administration in the U.S.
What if – Thomas asks – we could institute trials, checks, and balances on the safety and potential of new technologies before they’re unleashed as experiments on the world?
Regulation vs self-regulation
Neville protests that our thinking is limited by the legacy of geography, and that governments cannot regulate entities that have no truck with national borders.
Thomas brings us back to Mariana Mazzucato and her most recent book, The Big Con, which charts the rise and rise of convergent-thinking consultancies, whose singular mindset governs the way the world does business and tries to regulate – directed by the rationality of homo oeconomicus, STEM subjects, logic, and reasoning.
Not for the first (or the last) time in Small Data Forum history, Sam points us in the direction of the Princeton psychologist Daniel Kahneman, whose Nobel Prize-winning work with multiple collaborators has shown beyond doubt that we make our decisions emotionally. We use the evolutionarily ancient parts of the brain – with no access to data, facts, logic, or words – to make our decisions, and only then go on to justify them rationally, slowly, using our outsized human cerebral cortices.
Have we three Podnosticators answered the question? Almost certainly not. But in the Southern Spanish sun, we’ve certainly given it a damned good go.
Listen to Episode 67: