I build AI/ML systems that actually ship — from data infrastructure to production models. Ex-AWS L6 · WooliesX · Currently at Linkby.
Most AI projects fail not because the model was wrong — but because nobody solved the pipeline, the infrastructure, or the deployment. I have spent a decade building the 80% that everyone ignores. At AWS, processing ten petabytes daily. At WooliesX, serving millions of customers. At Linkby, proving a one-pizza team can outship a conventional team three times its size.
What I have built across enterprise ML, data platforms, and AI consulting engagements.
Started alone. Built Linkby's entire data infrastructure from scratch — Databricks platform, ClickHouse analytics migration, AI-agent-driven Dagster pipelines. Hired an ML Engineer, then a Data Scientist. Shipped a personalised click pricing model live in production. Three people delivering at 10x the velocity of a conventional team.
Led a team of 10 building data-centric network availability strategies processing 10+ PB daily. Implemented LLM Ops automation that eliminated 100+ manual engineering processes, delivering a 10x productivity gain. Contributed to a patent for a novel ML algorithm for complex network applications.
Led a team of 21 delivering personalised recommendation engines across Web, App, and Email for millions of Woolworths customers. Built graph neural network and ALS-based systems with real-time feature engineering at national retail scale.
Architected a pipeline converting sales audio into structured deal documents and closing workflows. Built network analysis to map client org structures — identifying champions versus blockers. Deployed a Slack RAG agent letting the sales team query deal context in natural language.
Technical lead for a production clinical transcription system. Fine-tuned Whisper on Bahasa Indonesia and English medical audio for code-switching. Designed a PDP Law-compliant data architecture for hospital partnership deployments.
The Amazon Leadership Principles shaped how I think, build, and lead. After years at AWS and beyond, these are not just values I aspire to — they are the operating system I actually run on.
Every system I build starts with who uses it and what they actually need — not what is technically elegant. At WooliesX that was millions of Woolworths customers. At Linkby it is advertisers and publishers. The model serves the customer, not the other way around.
I do not hand off problems. When I joined Linkby alone with a blank slate, I owned the full stack — data engineering, ML, infrastructure, and delivery. No team to blame, no pipeline to wait on. Ownership means acting like it is your company even when it is not.
The best solutions I have built are the ones that removed complexity rather than added it. AI-agent-driven pipeline automation at Linkby. A model-agnostic architecture that lets you swap the underlying model with one config change. Simplicity is the hardest thing to engineer.
My thesis at Linkby was that a one-pizza team with the right AI tooling can outship a conventional team three times its size. Most people thought it was ambitious. Nine months later, the data supports it. Thinking big means setting targets that require you to change how you work, not just how hard you work.
Speed matters in ML. A good model shipped in six weeks beats a perfect model shipped never. I build iteratively — get something to production, measure it, improve it. The teams I have seen fail spent months perfecting a model that never left a notebook. Move first, refine continuously.
I do not delegate understanding. When ClickHouse replication slots caused WAL storage to compound at Linkby, I went all the way down to the PostgreSQL internals to diagnose it. When a model drifts, I do not just retrain — I understand why the data distribution shifted. Leaders who do not dive deep make bad calls.
The only thing that matters at the end of a project is whether it shipped and whether it worked. 30% uplift in customer CTR at Linkby. 10x productivity gains at AWS. A clinical transcription system live in Indonesian hospitals. Results are not slides. They are systems running in production.
I got a ClickHouse certification in December 2025 while running a full production migration. I fine-tuned Whisper on Indonesian medical audio because nobody else had solved that specific problem. Curiosity is what keeps my technical instincts sharp when the field moves as fast as AI does right now.
I will not ship a model without monitoring. I will not deploy a pipeline without alerting. I have seen what happens when teams lower the bar on production standards — silent failures that erode stakeholder trust for months. High standards are not perfectionism. They are the minimum viable reliability.
I write about what actually breaks in production, not just what works. I flag problems to stakeholders before they become incidents. At AWS I was the person teams came to when something critical needed an honest assessment. Trust is built through consistency and transparency, not polish.
I have led teams of up to 21. The best hires I made were people who challenged my thinking within their first month. At Linkby, I hired an ML Engineer who took ClickHouse and Dagster further than I had planned. Developing people means giving them hard problems, not easy wins.
I have pushed back on architectural decisions that would have created technical debt. I have argued against deploying models that were not ready for production. But once the decision is made, I commit fully. The worst outcome is half-hearted execution on a decision you disagree with but never challenged.
Practical insights on production ML, AI agents, and what actually works when shipping AI at scale.
I do not just build AI systems for clients — I run my own workflows on Claude Code and AI agents every day.
My primary environment for building data pipelines and prototyping ML systems. The key is deep contextualisation — a well-briefed session moves faster than working alone and catches things I would miss.
At Linkby, built Dagster pipelines where AI agents write, test, and deploy transformations. Agents handle boilerplate. I focus on data model decisions and business logic that requires domain knowledge.
Every system I build decouples the interface from the model. Route repetitive tasks to cost-efficient models (Qwen). Route complex tasks to stronger ones. Swapping models should require changing one config line.
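A minimal sketch of what this routing pattern can look like. The model identifiers and task-type names here are hypothetical (the original only names Qwen as the cost-efficient option); the point is that call sites never name a model, so a swap is one config edit.

```python
# Model-agnostic routing sketch: the interface is decoupled from the model.
# MODEL_CONFIG is the single place a model swap happens.
MODEL_CONFIG = {
    "repetitive": "qwen-small",    # hypothetical: cost-efficient model for boilerplate
    "complex": "frontier-large",   # hypothetical: stronger model for reasoning-heavy work
}

def route(task_type: str) -> str:
    """Return the model identifier configured for a task type."""
    return MODEL_CONFIG[task_type]

def complete(task_type: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model the config names.

    A real implementation would call the provider's API here; this
    stub only demonstrates the routing decision.
    """
    model = route(task_type)
    return f"[{model}] {prompt}"
```

Because callers only ever pass a task type, repointing "complex" at a different provider changes one line of config and zero call sites.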
Define the task precisely before building the agent. Build model-agnostic from day one. Put humans at the right checkpoints. Cost-profile every workflow. Log everything — agents fail in unexpected ways without visibility.
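Two of those principles — cost-profile every workflow and log everything — can be sketched as a small wrapper around an agent step. Everything here is illustrative: the decorator name, the per-call cost figure, and the `summarise` stand-in are assumptions, not the actual implementation.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def observed(cost_per_call: float):
    """Wrap an agent step so every call is logged and cost-profiled."""
    def deco(fn):
        totals = {"calls": 0, "cost": 0.0}

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            totals["calls"] += 1
            totals["cost"] += cost_per_call
            # "Log everything": duration and running cost per step.
            logging.info("%s took %.3fs (calls=%d, total cost $%.4f)",
                         fn.__name__, time.perf_counter() - start,
                         totals["calls"], totals["cost"])
            return result

        wrapper.totals = totals  # expose running totals for cost review
        return wrapper
    return deco

@observed(cost_per_call=0.002)  # hypothetical per-call cost
def summarise(text: str) -> str:
    """Stand-in for an agent step that would call a model."""
    return text[:20]
```

Human checkpoints fit the same shape: a step flagged for review pauses and surfaces its output before the workflow continues, rather than running end to end unattended.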
Built Slack AI agents with RAG on sales transcripts — teams query deal context in natural language, surface relationship intelligence via network analysis on client org structures.
Fine-tuned Whisper for Bahasa Indonesia medical audio with English code-switching. Exploring domain-specific fine-tuning of smaller models where large general models are overkill.
Available for consulting engagements in production ML, data platform architecture, and AI systems. Not pilots or strategy decks — systems that actually ship.