Ethics, privacy and trust in Sky Live. Leading the Digital Ethics Network to embed responsible design into one of Sky's most technically complex and data-sensitive products.
01 · Outcome
The work resulted in three actionable asks delivered directly to the directors' board, a set of current and future ethical initiatives shaping Sky Live's approach to privacy, AI, and responsible design, and a reusable Digital Ethics Framework adopted across teams. It moved ethics from a theoretical conversation to a practical, embedded part of how Sky Live was designed and shipped.
02 · Context
Sky Live is a camera-first product operating in people's living rooms, processing video, audio, and biometric data to power real-time experiences. The ethical stakes are unusually high: trust, privacy, and responsible AI aren't optional considerations; they are foundational to whether customers adopt the product at all.
Research indicated customers didn't yet fully trust Sky on privacy and security. Consent wasn't always informed, and it was unclear how well customers understood what the service did with their data.
The ability to innovate depends on embedding ethical principles into every team working on Sky Live. Building these values in from the start creates technology that is forward-looking, human-centred, and trustworthy.
AI sits at the heart of Sky Live, powering many of its core experiences. Taking pre-emptive steps to design AI responsibly means building applications that are effective, transparent, and genuinely aligned with customer needs.
03 · Workshop
I designed and facilitated a hybrid workshop with 26 attendees drawn from across Sky, all actively involved in Sky Live. The methodology combined a Design Sprint structure with an Ethical Design Framework, using Miro as the shared canvas.
Map customer risks related to privacy and security, evaluating the value, impact, and likelihood of each one.
Generate actionable mitigations the wider team could adopt, building confidence that identified risks could be addressed through design.
A mix of Design Sprint and Ethical Design frameworks in Miro: lightning talks on Ethical Explorer and Layers of Effect, mapping Tech Risk Zones, and identifying who owns each solution.
Customers didn't fully trust Sky on privacy. Consent wasn't always informed, and there was a gap between what the service did and what customers understood it to do.
A camera-first device in the home introduced real risks: potential for surveillance, stalking, or hacking. These needed explicit design mitigations, not just policy statements.
Inclusion issues beyond accessibility: ML-based face detection is fallible and risks excluding users. Children's rights, positive self-image, and non-violent values all needed active consideration.
Watch Together's shared rooms raised risks of inappropriate broadcasts, children answering video calls without consent, and behaviour Sky had no existing guidelines to address.
Machine learning models risk encoding bias if they aren't trained on sufficiently diverse datasets. Teams lacked a shared understanding of how the models worked or how to design around their limitations.
04 · Initiatives
The workshop translated directly into a set of current initiatives and a forward-looking roadmap, with three asks escalated to the directors' board.
Identify the severity and likely impact of privacy risks on purchase intent. Recommendations: include a privacy cover in the box, use the Welcome app to build trust, and design Gen 2 hardware around privacy from the start.
Publish a customer promise on privacy for Sky Live at launch, and surface it in marketing and onboarding. Define ethical principles for partners and internal teams, including partner statements on their approach to AI.
Accelerate on-device consent to 2022. Move beyond a single shared opt-out for Glass and Sky Live, introducing separate on-product opt-outs to give customers more granular control.
05 · Framework & Talks
The workshop was one piece of a broader effort to embed ethical thinking across design at Sky.
Using Ethical Explorer and Layers of Effect as foundations, I built a Miro-based framework for teams to adopt in their own projects. It served as a shared tool to spark discussion, align stakeholders, and surface the ethical impact of design decisions before they shipped.
I delivered a series of talks across Sky teams to raise awareness of how designers should approach AI responsibly. Topics covered practical AI use cases, machine learning concepts, model cards, and the unique UX challenges of AI-driven products, all delivered pre-GenAI.