Mapping the Public Media Stack: what we did and how you can use it
By Jay Owens
To offer public media organizations a guide to media technologies – and to the potential for making more ethical and sustainable decisions about which to use – we needed to do two things: identify a comprehensive list of these technologies, and then review them.
Following a workshop in New York in May 2019 – where over 50 people working in public media and public media tech came together to explore the idea of a Public Media Stack – we had an exciting list of ideas and technologies to consider. The next step was to get from these hypotheses to a comparable dataset.
How do you do this? You need to create structure. And this has been my focus as the project’s research design lead.
Stage 1. Questionnaire design
I made a single, comprehensive list of all the questions, issues and topics raised at the New York workshop – then interrogated these questions to develop them into a research questionnaire.
Key considerations were:
- We needed to ask closed questions (ones with yes/no answers, or a range of options from a list) so that the answers would be a structured dataset that we could chart and summarize – not just a series of text strings.
- We needed to ask objective questions, not opinion-based ones – questions that different analysts would answer in the same way.
- We needed clear sources. Questions were designed to be answered from information found on the product website or in news coverage, rather than requiring first-hand user experience.
- We needed to know why we were asking these questions. I mapped each question to the type(s) of risk it helps publishers avoid – e.g. knowing what drives higher pricing helps reduce the financial risk of buying a tool. (A sketch of how such a question might be structured follows this list.)
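To make these design principles concrete, here is a minimal sketch of how a single closed question could be represented as structured data. The question wording, answer options, and risk mapping shown are hypothetical illustrations, not items from the actual questionnaire:

```python
# A hypothetical closed question, structured along the lines described above:
# a fixed set of answer options, answerable from public sources, and mapped
# to the type(s) of publisher risk it helps avoid.
question = {
    "id": "pricing_transparency",                          # hypothetical identifier
    "text": "Is pricing information published on the product website?",
    "options": ["yes", "partially", "no", "cannot tell"],  # closed, not free text
    "sources": ["product website", "news coverage"],       # no first-hand use required
    "risk_types": ["Financial"],                           # why we ask it
}
```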
In this way, some initial concerns about media industry ineptitude or questionable tool choices became rather more dialled-down inquiries into the competitive position of each tool, the availability of open-source alternatives, and lock-in factors like data export, deletion, and proprietary file formats.
It’s obviously crucial to give space to more qualitative, values-based questions – but these are addressed in our expert essays, the interviews with media organizations about their experiences with their tech stacks, and potential user reviews to come.
Stage 2. Research process
In the interest of going beyond Google ourselves, we used the cloud-based spreadsheet/database tool Airtable both to build a form for data entry and to store the answers. Tech journalists Martin Bryant and Imran Ali worked through the coding in December 2019 and January 2020. We had a total of 36 questions, which I had initially thought might be too many – but the easy-to-use form and the closed question format meant that each tool could be coded in around an hour.
Once coding was complete, I ran data quality checks to ensure classifications were consistent, then charted the data to get a read on patterns before applying the scoring. At this point we had to switch to a Microsoft stack, for the ease of Excel pivot tables and PowerPoint charting.
Stage 3. Scoring
The questions had already been designed to assess potential risk areas for publishers – so scoring was a straightforward matter of assigning ‘risk points’ for particular answers. We’ve included the scoring table below so you can see how we allocated risk points to each question.
- 22 of the 36 questions were scorable
- Up to two points were given per ‘risk’ answer – with one point for answers that carried a lower level of risk, and zero for non-risk answers
- ‘Cannot tell’ answers were scored as risks, as lack of information available pre-purchase increases the risk of making suboptimal decisions
- Points were summed for each of the four sections of the questionnaire (Strategy, Financial, Technical, and Data, Security & Ethics) and expressed as a percentage of the maximum possible score
- Each section was weighted to account for a quarter of the final score (the calculation is sketched below)
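As a rough illustration of this arithmetic, here is a minimal sketch in Python. The question names, section groupings, and answers are hypothetical, and the exact mapping from answers to points is an assumption – only the 0/1/2 point scale, the treatment of ‘cannot tell’ as a risk, and the equal section weighting follow the method described above:

```python
# Risk points per answer: 2 = risk, 1 = lower-level risk, 0 = no risk.
# The specific answer-to-points mapping is illustrative; 'cannot tell'
# is treated as a risk, as in the method described above.
RISK_POINTS = {"yes": 0, "partially": 1, "no": 2, "cannot tell": 2}

# Hypothetical answers for one tool, grouped by questionnaire section.
tool_answers = {
    "Strategy":                {"public_roadmap": "yes", "open_source_alternative": "partially"},
    "Financial":               {"clear_pricing": "no"},
    "Technical":               {"data_export": "yes", "regular_updates": "yes"},
    "Data, Security & Ethics": {"gdpr_information": "cannot tell"},
}

def section_risk_pct(answers: dict) -> float:
    """Section risk points as a percentage of the maximum possible (2 per question)."""
    points = sum(RISK_POINTS[answer] for answer in answers.values())
    return 100 * points / (2 * len(answers))

# Each of the four sections contributes a quarter of the final score.
final_risk_pct = sum(section_risk_pct(a) for a in tool_answers.values()) / len(tool_answers)
print(f"Final risk score: {final_risk_pct:.0f}%")
```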
Final scores ranged from 0% risk points for the best-scoring tool (a very well-documented collaboration product) to 62% for the worst (also, as it happens, a collaboration tool).
However, percentage scores felt artificially precise, and even misleading to communicate: a tool scoring 13% is not meaningfully riskier than one scoring 12%. Instead we thought it would be more useful to indicate that both were low scorers overall, and among the least risky.
We then grouped the products by quartile, with 28 products in each quartile, and split the bottom quartile into two groups depending on the information available (a sketch of this grouping follows the list):
- Lowest risk (risk scores 0-16%)
- Small risk (risk scores 16-23%)
- Some risk (risk scores 25-39%)
- High risk (scores >40%) – 17 products
- Lacks info (where >30% of scored answers were ‘cannot tell’) – 11 products
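Below is a minimal sketch of how this grouping could be applied, assuming the percentage cut-offs from the list above; the exact handling of band boundaries is our assumption, since the published groups were derived from quartiles rather than fixed thresholds:

```python
def risk_band(risk_pct: float, cannot_tell_share: float) -> str:
    """Map a tool's overall risk percentage to one of the bands listed above."""
    if risk_pct >= 40:
        # The bottom (worst) quartile is split by how much information was available.
        return "Lacks info" if cannot_tell_share > 0.30 else "High risk"
    if risk_pct >= 25:
        return "Some risk"
    if risk_pct > 16:
        return "Small risk"
    return "Lowest risk"

print(risk_band(12, 0.05))   # -> Lowest risk
print(risk_band(56, 0.40))   # -> Lacks info
```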
Interpreting this information
This research is essentially a first round of due diligence on tools you might consider using in your media technology stack. Tools we’ve flagged as ‘lowest risk’ are unlikely to catch you out: their pricing is transparent, they have clear data policies, they’re typically updated regularly with new feature releases, and most have open-source alternatives. The ‘small risk’ set is also likely to provide sound choices, but these tools perhaps share a bit less information in one or two areas – an ambiguity that has raised their risk scores slightly.
Tools flagged as ‘some risk’ and ‘high risk’ could still be a fit for your organization – but we’d recommend robust research before you make that decision. They may be better suited to more experienced and technically resourced media projects: for example, they’re likely to need expertise or developers to install. We’ve flagged the areas where their risk scores were higher, as pointers to where you may want to ask sales reps for additional information. And it could be helpful to speak with other users and read reviews (e.g. on G2Crowd.com) before you decide.
Some of the questions we asked didn’t easily fit a scoring framework – but the information they yield is nonetheless very useful to share. We’ve summarized these factors on each product profile so that, at a glance, you can understand skill needs and the factors affecting pricing.
Key findings
- The skill levels required to set up and use media tools increase as you move through the workflow. 83% of Collaboration tools are ‘plug and play’ – but for Publishing, Measurement, Audience, and Storage, you’re likely to need either a tool or domain expert, or (for the latter two especially) a software developer.
- Despite GDPR, about a third of tools are lacking in data policy and data control. 38% of tools don’t have clear GDPR information – with Collaboration, Audience, and Publishing tools the most lacking – which can also make it harder for media projects to manage their own data policies. And around a third report only partial or no ability to export or delete your data (or your audience’s, if applicable); this applies particularly to Publishing tools.
- Ensuring mission and values alignment with tech suppliers isn’t easy: 63% of products don’t have an accessibility statement on their websites, and 68% don’t share workforce diversity data. Including these factors in RFPs and tenders could help shape industry norms for the better.
- Pricing clarity varies wildly. 80% of Collaboration and Production tools either have very clear pricing information online or are clearly free. By contrast, three in four Measurement and Audience tools are either only partly clear, or not clear at all. Pricing for ad spend on the big social media platforms is hard to judge: you can control campaign pricing closely, but there aren’t benchmarks for individual publishers to judge typical or optimal campaign costs.
- Five products in the top 10 were from Microsoft, with very low risk scores of 5% or less. Microsoft win on clear documentation: from product roadmaps to pricing, they lay it all out commendably clearly.
What you can do with this information
Above all, we hope this structured review can save many media projects a lot of time. You can use this database in at least three ways:
- A comprehensive list of the main tools at each stage of the stack. Several rounds of expert input and review should mean that we’ve got all the main contenders – though we are almost certainly missing some of the smaller open source and brand new startup options. Use these lists to ensure your own longlists are comprehensive.
- Shortlisting which tools you want to explore further. Use these rankings and the detailed tool information provided to prioritize which tools are most likely to fit your needs, so you only have to explore a few in depth.
- Identifying key due diligence questions that you need to ask. It can be tricky to know you’ve covered every question you need to, particularly regarding business or technical problems you haven’t encountered before. Use our expert-generated list to feel confident that you’re covering every area of risk.
We sincerely hope you find this analysis useful. Let us know how you’re using it over the next year, which information was most useful, and whether there’s anything you’d like us to add. Either tweet us at @storythings #publicmediastack or email your feedback here.
Jay Owens
Research and Framework Designer for the Public Media Stack