What are the biggest decisions people working in public media tech have to make? And if there were one decision they could change, what would it be?
In this new fortnightly webinar series from the Public Media Stack, we explore the art (and science) of making tech decisions with some of public media’s prominent voices. We’ll ask them five simple questions and unpack the issues that universally affect our industry – problems with workflow, the relationship between newsrooms and tech teams, innovation in reader engagement, funding diversification – as well as the presence of and reliance on Big Tech and the efforts to create alternative ecosystems.
For our third Building The Stack podcast, we were joined by Noelle Silver, a multi-award-winning technologist who specializes in conversational AI and Voice Development, Intelligent Apps and Agents, and Responsible AI Practices. She has led teams at NPR, Microsoft and Amazon, and is the Founder of Women in AI and the AI Leadership Institute. She is currently Head of Instruction, Data Science, Analytics and Full Stack Web Development at HackerU, where she works with universities and leads a team of engineers to create innovative tech education programs that prepare students for careers in the tech industry.
On building a product beyond the financial incentive:
‘A platform like Alexa is scaled so that it can provide the right answer to the right person in the right location. It’s expensive to offer that as a service, as an organization, so in the early days Alexa had to decide if it was something worth going after, but over time it really did prove it was the right thing to do. It’s way more work than somebody might typically do if they were just interested in building a financially successful product. It shifts your mission as an organization when you realise people are using [your product] to make some life and death decisions.’
On utilizing AI models to enhance learning:
‘Right now I’m building an experiential learning platform that ties directly to jobs. If you’re going to be learning from home, how do you start to leverage conversational AI and artificial intelligence and different AI models into that learning experience? Online learning is tough. I have two teenagers, one with special needs. How can I use data on how they’re watching, what they’re engaging with, what quizzes they’re doing, how many minutes of a video they watch, to create a trajectory of their success? As soon as I notice they’re not doing the work the way they used to, how can I have an AI bot chime in and say, “hey, I noticed you didn’t finish this, go try this fun project”? How can we use artificial intelligence to choreograph an experience for someone online? The tech exists. We just don’t have this happening for our students. How can we use AI to stop people from just quitting when they’re doing something hard?’
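What that could look like in practice: below is a minimal sketch of the kind of engagement-triggered nudge Silver describes – compare a learner’s recent activity to their own baseline and have a bot chime in when it drops. The metric names, the 50% threshold and the message are all illustrative assumptions on our part, not details of HackerU’s actual platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngagementSnapshot:
    """One week of engagement signals for a single learner (hypothetical metrics)."""
    video_minutes: float      # minutes of lesson video watched
    quizzes_completed: int    # quizzes finished this week
    assignments_started: int  # projects or exercises opened

def nudge_if_disengaging(current: EngagementSnapshot,
                         baseline: EngagementSnapshot,
                         threshold: float = 0.5) -> Optional[str]:
    """Return a nudge message when activity falls well below the learner's
    own baseline; otherwise return None and stay quiet."""
    ratios = [
        current.video_minutes / max(baseline.video_minutes, 1),
        current.quizzes_completed / max(baseline.quizzes_completed, 1),
        current.assignments_started / max(baseline.assignments_started, 1),
    ]
    if sum(ratios) / len(ratios) < threshold:
        return ("Hey, I noticed you didn't finish this week's work - "
                "want to try this fun project instead?")
    return None

# A learner who used to watch two hours of video a week drops to twenty minutes:
baseline = EngagementSnapshot(video_minutes=120, quizzes_completed=3, assignments_started=2)
current = EngagementSnapshot(video_minutes=20, quizzes_completed=0, assignments_started=1)
print(nudge_if_disengaging(current, baseline))  # prints the nudge
```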
On deciding to speak up:
‘Earlier in my career, when I saw something that wasn’t right I very rarely said anything. I never complained about bad or unethical behaviour. With the younger generations, I’m starting to see a change in that. The thing is, if you don’t say anything, people don’t realise that something is wrong – a company doesn’t realise that it’s wrong – especially with ethics and AI in my world. Most of us build [something] because we’re a bunch of people [who] all look the same, talk the same and ask the same questions – I had a diverse opinion but I was one of very few – so when we released a product it was no surprise that it only really served the demographic that the developers happened to look like. The human part of our technology is probably the most challenging. How can we create communities where your voice matters?’
On the tech Noelle would most like invented:
‘I picked up an old anthology of time travel stories from all the great science fiction writers. What I immediately thought of was the concept of the work I started at Alexa: a contextual assistant. What would it be like if all devices [Siri, CarPlay, Alexa] were connected to a consistent infrastructure? So that if I wanted to talk to my bank, I could start that conversation in my kitchen, and then when I’m outside I could lift up my phone and continue the conversation – you’re not going to ask me to start over every time I go to a new platform or, even worse, the same platform when I ask for another version of the same thing. That’s what I would love.’
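As a rough illustration of that ‘consistent infrastructure’, here is a minimal sketch in which conversation state lives in one shared store keyed by user, so any device can resume it mid-flow. The class and method names are our own assumptions – this is not a real Alexa, Siri or CarPlay API.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConversationState:
    topic: str                                  # e.g. "bank_transfer"
    slots: dict = field(default_factory=dict)   # details gathered so far

class SharedSessionStore:
    """Holds one active conversation per user, shared across every device."""
    def __init__(self) -> None:
        self._sessions: Dict[str, ConversationState] = {}

    def resume_or_start(self, user_id: str, topic: str) -> ConversationState:
        state = self._sessions.get(user_id)
        if state is not None and state.topic == topic:
            return state                        # pick up where the user left off
        state = ConversationState(topic=topic)
        self._sessions[user_id] = state
        return state

store = SharedSessionStore()
# The conversation starts in the kitchen, on a smart speaker...
kitchen = store.resume_or_start("noelle", "bank_transfer")
kitchen.slots["amount"] = 50
# ...and continues outside, on a phone, without starting over.
phone = store.resume_or_start("noelle", "bank_transfer")
assert phone.slots["amount"] == 50
```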
On data and ethics of storage:
‘I can imagine a middle layer emerging – a company, person, or software – that sits in the middle and says: I will protect your data. What if they become the steward? We’ll have this trusted relationship and I, as your trusted steward, will divvy out your data and keep track of it. If [only] people realised the cost of convenience is way higher than they think. Ethically, maybe we want to create a little bit more of a push towards some process where you’re managing your data as an individual. Because as you start checking the boxes on user agreements, you really don’t understand what you’re saying yes to anymore; now the data you’re contributing could be used to do anything, but you graciously gave it away. I think there is room for a data stewardship organization that gives users a tool for reducing the risk of having all of these one-to-one relationships with all these different organizations that need their data.’
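A toy sketch of what such a stewardship layer might do: hold the user’s data, divvy out only the fields the user has consented to share with each organization, and keep an audit trail of every disclosure. Everything here – names, fields, API – is a hypothetical illustration, not an existing product.

```python
from datetime import datetime, timezone
from typing import Dict, List, Set, Tuple

class DataSteward:
    """Holds a user's data, releases only consented fields, logs every disclosure."""
    def __init__(self, user_data: dict) -> None:
        self._data = user_data
        self._grants: Dict[str, Set[str]] = {}   # org -> fields the user allowed
        self.audit_log: List[Tuple] = []         # (when, which org, which fields)

    def grant(self, org: str, fields: Set[str]) -> None:
        """The user explicitly allows an organization to see certain fields."""
        self._grants.setdefault(org, set()).update(fields)

    def revoke(self, org: str) -> None:
        """The user withdraws all consent for an organization."""
        self._grants.pop(org, None)

    def request(self, org: str, fields: Set[str]) -> dict:
        """Release only the consented subset and record the disclosure."""
        allowed = fields & self._grants.get(org, set())
        self.audit_log.append((datetime.now(timezone.utc), org, sorted(allowed)))
        return {f: self._data[f] for f in allowed if f in self._data}

steward = DataSteward({"email": "me@example.org", "zip": "98101", "dob": "1980-01-01"})
steward.grant("news_site", {"email", "zip"})
print(steward.request("news_site", {"email", "zip", "dob"}))  # "dob" is withheld
```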
Advice for making tech decisions:
‘Don’t make decisions in a vacuum. Make sure you have representation at your table – all of their feedback will feed into different constituencies, especially when building products at scale. And understand that every tech decision you make should use quantitative and qualitative data together. Every decision you make should be a data decision; we’re collecting data on virtually everything. Obviously, use ethical ways of collecting this data. But having a diverse and inclusive group of people at the table, combined with qualitative and quantitative data – I feel this sets you up for the highest level of success.’