17 years · 4 platforms · Meta · Amazon · Google ecosystems

How I
think.

Seven years as a Senior PM across augmented reality, enterprise SaaS, virtual collaboration, and spatial computing. What follows is not a list of things I did — it is how I approached the problems that mattered.

The short version

Platform thinking

Built and ran creator ecosystems across AR and spatial computing where the product only works if other people build on top of it. Scaled creator programmes, tooling, and developer infrastructure from zero.

0→1 in enterprise

Took a frontline shift management product from whitespace to a live customer pilot — research, strategy, build, and validation — in under a year. The hardest kind of PM work: no prior art, no template.

Technical depth

10 years as an embedded systems engineer before product. I can read a codebase, evaluate an architecture, and have a real conversation with engineers about trade-offs — without needing it translated.

AI-native practice

Formally assessed as one of the most advanced AI practitioners in my organisation. 860+ AI sessions per month. I use AI the way a senior engineer uses tooling — to understand systems, not just to generate output.

Stability before scale

Drove stability initiatives on a spatial computing platform that laid the foundations for sustainable growth to over 1M monthly active users. Growth built on shaky ground does not hold.

Matrixed execution

Coordinated 200+ engineers across 5+ teams at Meta Reality Labs. Alignment across large organisations is a skill — it does not happen by itself and it does not scale without deliberate infrastructure.

Selected work

01 — Most recent

Becoming an AI-first product manager

Cross-cutting · Spatial computing platform · Formally assessed

Most product managers treat AI as a productivity shortcut. I approached it as a discipline — building workflows where AI gives me the same depth of system understanding that used to require an engineer in the room.

Managing a complex runtime platform meant I needed code-level understanding to make sound decisions. The traditional approach — scheduling deep-dives, waiting for engineers to explain things — was too slow and too dependent on other people's availability. I needed genuine technical grounding without creating a bottleneck for anyone else.

I built what I call an Architect workflow — using AI primarily to read and understand codebases rather than generate code. I adopted 8 tools across 19 surfaces and ran 860+ AI-assisted sessions in a single month. The discipline was in model selection: lightweight models for quick lookups, more capable ones for deep architecture analysis. The result was engineer-level codebase understanding I could act on independently.
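The model-selection discipline described above boils down to a simple routing rule. A minimal sketch, with made-up model names and task categories (my actual tooling is not shown here):

```python
# Hypothetical sketch of routing tasks to model tiers by depth of analysis.
# Model names and task kinds are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Task:
    kind: str           # e.g. "lookup", "architecture", "review"
    files_touched: int  # rough proxy for how deep the analysis goes


def pick_model(task: Task) -> str:
    """Send cheap, shallow questions to a lightweight model and
    deep, cross-cutting analysis to a more capable one."""
    if task.kind == "lookup" and task.files_touched <= 2:
        return "light-model"   # fast and cheap: "what does this function do?"
    if task.kind == "architecture" or task.files_touched > 10:
        return "deep-model"    # slower, more capable: whole-subsystem analysis
    return "mid-model"         # default tier for everything in between


print(pick_model(Task("lookup", 1)))         # light-model
print(pick_model(Task("architecture", 40)))  # deep-model
```

The point is not the code — it is that the routing decision is explicit rather than defaulting every question to one model.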

I could evaluate engineering proposals with real technical grounding, identify risks earlier, and contribute directly to performance tooling decisions. I committed a diff with 3,000+ lines changed — unusual for any PM. Formally assessed as AI-First — second-highest maturity tier, P87 frequency and P71 depth among all product managers in the organisation.

AI-First · P87 frequency · P71 depth · 860+ sessions / month · 8 tools · 19 surfaces · 3,000+ line diff shipped

02 — Spatial computing

Earning the right to scale

Meta Reality Labs · Spatial computing · Creator economy

A platform with performance and stability problems does not deserve to grow. Before any conversation about features or growth numbers, the work was to make the platform trustworthy enough for creators to build on it seriously.

The platform had an engaged creator community but chronic stability issues that undermined confidence. Creators were building on a surface they couldn't fully trust, which limited what they were willing to attempt. The question was not which features to add — it was whether the platform had earned the right to ask creators to invest more in it.

I drove a stability programme addressing root causes of creator friction — performance tooling that let creators find and fix their own bottlenecks, and a Docathon that produced 31 new docs, updated 55, and deprecated 81. Only when that foundation was solid did I introduce creator competitions and ship persistent variables and leaderboards — the features creators wanted, but that would have failed on an unstable platform.

The platform grew to over 1M monthly active users. But the outcome I am most proud of is that creators started building more ambitious things. That only happens when they trust the surface they are building on.

Over 1M monthly active users · Creator competitions launched · 86 documentation changes · Performance tooling shipped

03 — Enterprise SaaS

Building for a market nobody had mapped

Meta · Enterprise communication · Frontline workforce · 0→1

80% of the global workforce are deskless workers. Almost no enterprise communication product was built for them. That is not an oversight — it is a hard problem, because the people who design software are rarely the people who work shifts.

The enterprise platform served knowledge workers well. Shift-based frontline workers — food manufacturing, logistics, retail — were invisible to it. They managed shift cover with WhatsApp groups and phone calls, and a missed shift meant real operational cost. The question was whether software could actually change that behaviour, or whether the problem was too embedded in how frontline work was organised.

I started with extended fieldwork — observing how shift managers and workers actually communicated, not how they said they did. The insights shaped a phased strategy: first solve the most acute pain (shift cover requests), then build toward full schedule management, then integrate with existing WFM systems. I partnered with a major food manufacturer to run a real-world pilot with actual workers on actual shifts — not a prototype, a live test of whether the product changed behaviour.

Shift cover requests that previously took hours were resolved in minutes during the pilot. The partner confirmed the core value proposition. The product and strategy became the foundation for the company's broader frontline worker investment — which is what 0→1 PM work is actually trying to do: create the conditions for a much larger bet.

Live pilot validated · Hours → minutes for shift cover · Foundation for company-wide initiative

04 — Collaboration

The unglamorous work that actually drives retention

Meta · Virtual collaboration · Video conferencing · Calendar

In a video conferencing product, the most important moment is whether someone successfully gets into the meeting. Everything else is secondary. We were failing at that moment more than we should have been.

Calendar sync failures were causing meetings to disappear or show incorrect details. The join flow had unnecessary friction. These were not exciting product problems — they were reliability issues that quietly erode trust and push users toward competitors that just work. In a product competing against established tools, the bar for basic reliability is high and unforgiving.

I partnered with infrastructure engineering to find and fix the root causes of calendar sync failures — not just the symptoms. Alongside that, I redesigned the accept/decline flow, shipped calendar edit mode, and launched a lightweight meeting experience that reduced load times for ad-hoc conversations. Every change was motivated by one question: does this make it more likely that someone gets into their meeting?

Calendar sync reliability improved significantly. Meeting join completion rate improved. Support tickets related to meeting confusion fell. These are not headline numbers — they are the numbers that determine whether users stay long enough to discover the features that actually differentiate the product.

Calendar sync reliability improved · Join completion rate up · Lightweight meeting mode launched

05 — AR platform

Lowering the barrier to creation at scale

Meta Spark AR · Creator tools · Developer ecosystem

A creator platform is only as good as the creators willing to use it. If the barrier to first creation is too high, you lose the people who would have made the most interesting things — because they give up before they find out what they are capable of.

The AR creation tool was powerful but asked too much of new creators. A blank canvas and complex tooling meant high drop-off early in the journey. The content supply problem was actually an onboarding problem — we were not giving new creators enough reason to believe they could do this.

I led the design and launch of a templates system — pre-built AR experiences creators could modify and publish rather than build from scratch. Alongside that, I shipped a guided Studio Tour that walked new users through the key features at the moment they needed them. I also introduced structured feedback channels that fed directly into the roadmap — so creators could see that their input changed what got built next. That loop compounds.

Templates became the highest-adoption entry point for new creators. New users were publishing their first AR effect in minutes rather than hours. Creator satisfaction improved as feedback was visibly actioned — and the community grew because people felt the product was being built with them, not just for them.

Templates → highest-adoption entry point · First publish: hours → minutes · Creator satisfaction improved

06 — AI-native practice

How I actually use AI

Personal practice · Independent research

I use AI inside the process, not at the end of it. Stress-testing decisions before I commit. Synthesising signal before forming a view. Connecting it to data sources and structured workflows rather than asking one-off questions.

Knowing which mode a task calls for — thinking partner, execution layer, pipeline component — and not defaulting to the easiest one. Most people default to the chat box. The leverage is in knowing when that is the wrong tool.

I documented this honestly: what workflows exist, where AI genuinely adds leverage, and where it flatters without improving. I ran a structured experiment asking both Claude and ChatGPT to independently assess my usage across five dimensions — frequency, breadth, workflow sophistication, dependency level, and output leverage. Both converged on the same observation: the meaningful distinction is not how often someone uses AI. It is whether they have rebuilt how they work around it.
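The mechanics of that experiment are simple to sketch: two independent raters score the same five dimensions, and convergence is measured as the largest per-dimension disagreement. The scores below are illustrative, not the actual assessment transcripts:

```python
# Illustrative sketch of the dual-model assessment. Each rater scores
# five dimensions out of 5 (25-point scale); convergence is the largest
# per-dimension gap between raters (0 = full agreement).
DIMENSIONS = ["frequency", "breadth", "workflow", "dependency", "leverage"]


def convergence(scores_a: dict, scores_b: dict) -> int:
    """Maximum per-dimension disagreement between two raters."""
    return max(abs(scores_a[d] - scores_b[d]) for d in DIMENSIONS)


# Hypothetical scores for the two raters (not the real transcripts).
claude = {"frequency": 5, "breadth": 5, "workflow": 4, "dependency": 5, "leverage": 4}
chatgpt = {"frequency": 5, "breadth": 4, "workflow": 5, "dependency": 5, "leverage": 4}

print(sum(claude.values()), "/ 25")                    # total on the framework scale
print("max disagreement:", convergence(claude, chatgpt))
```

Scoring the same rubric through two independent models, then comparing per-dimension gaps, is what makes the convergence claim checkable rather than anecdotal.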

A quantification framework and a builder's log — a detailed account of what AI-native practice actually looks like in a product management context. The dual-model assessment removed the self-serving bias that comes from evaluating your own practice. The convergence between two independent models on the same conclusions gave the result credibility.

23/25 framework score · Top 1–2% globally · Dual-model assessment (Claude + ChatGPT)

AI practice

I use AI the way a senior engineer uses their tooling — not to replace judgement, but to extend it. The Architect workflow: understand the system before touching it. Plan first, build second.

Sessions / month

860+

Across coding, research, strategy, and document creation

Maturity tier

AI-First

Formally assessed — P87 frequency, P71 depth among all PMs

Tools × surfaces

8 × 19

Eight tools across nineteen workflow surfaces — integrated, not bolted on

Profile

The Architect

100% read-oriented. Understand the system before building it

How I work

Platform thinking

Platforms only work when others build on top of them. I have run creator and developer ecosystems where my job was to make building easy enough that talented people chose our surface over alternatives — and kept choosing it.

Technical fluency

10 years as an embedded systems engineer. I can read a codebase, evaluate an architecture decision, and contribute to technical conversations without needing translation. AI has extended this further than I expected.

0→1 in hard markets

Built a frontline workforce product from nothing — no template, no prior art, no guaranteed outcome. The skill is knowing when you have learned enough to commit to a direction, and when you haven't.

Matrixed execution

Coordinated 200+ engineers across 5+ teams. Alignment in large organisations requires deliberate infrastructure — clear decisions, visible trade-offs, and stakeholders who trust the process even when they disagree with the outcome.

AI-native working

Formally assessed AI-First practitioner. I have built repeatable workflows where AI gives me engineer-level depth in codebases and research-analyst speed in strategy work. This is not a feature I have — it is how I work.

Stability before scale

The hardest product judgement is often knowing that a platform needs to be made trustworthy before it deserves to grow. I have made that call and executed against it, even when the pressure was to ship features instead.