Groupon AI Commercialization Architect Roland Abou Younes shares how he built an AI operating system for an entire engineering organization using Claude Code — and why the real work of AI adoption is infrastructure, not inspiration.
When I joined Groupon in early 2025, the question was: can AI help? Twelve months later, the question has flipped to: why are you still doing this manually?
Takeaways are at the end.
My one-year anniversary at Groupon coincided with the day Nikash R., my first VP at Groupon, told me: "I can see now how you were building an operating system, and I see how others are building apps on it now."
This article was originally written for internal use, where I highlight my achievements and share a fully detailed environment with resources, set up so that people could read the first paragraph, feed the link to their Claude Code, and see how their own setup could benefit from the linked resources.
I was recently asked to explain the "AI enablement" in my title, and it felt like a good moment to reflect on my past year at Groupon. For many, this document will read as "yes, we saw this as it emerged — what's new?" But for a few, it might be a revelation about what I have been doing at Groupon so far.
When I joined Groupon in early 2025, the engineering org's relationship with AI was roughly: "Can ChatGPT do development?" A handful of engineers had personal subscriptions. There was no shared infrastructure, no procurement framework, no knowledge of how token economics work, no institutional approach to AI tooling. We had a few curious individuals and a lot of uncertainty.
Twelve months later, most of the engineering organization rarely writes code by hand. Engineers run parallel AI agents across workstreams. Teams that were estimating tasks in weeks are delivering in days. The question has flipped from "can AI help?" to "why are you still doing this manually?"
A few people drove this transformation together with me more than anyone else: CEO Dusan Senkypl with his vision; our CTO Ales Drabek with his trust, support, and strategic guidance; and Viktor Bezdek, always running POCs on the latest tech the day it is released. Dusan set the tone from the top; his consistent message was "forget about the cost, focus on ROI," which gave everyone permission to experiment without fear. Viktor brought deep technical credibility and built the advanced patterns (self-improving agents, Claude skills architecture) that showed engineers what was actually possible. My role was the connective tissue: I built the education platform, managed the vendor relationships, drove the AI procurement and governance frameworks alongside Radek Urban and Olga Linhart, onboarded business stakeholders to what this technology means commercially, and connected the people doing interesting work to each other.
Ales Drabek, our CTO, backed this consistently, approving the enterprise seats, supporting the vendor negotiations, requesting the architectures, and giving me the latitude to run GFAISS, manage the supplier relationships, and push the agentic commerce exploration forward. Without that top-down cover from Dusan and Ales, none of this scales beyond one person's side project.
What follows is the factual record of what I did, what I directed AI to build, what I shaped through collaboration, what I led through vendor and partner relationships, and whose work I amplified.
The Recipe: What My "Claude Code Operating System" Actually Consists Of
The operating system is the management layer, the coordination framework, and the enablement infrastructure that made it possible for teams across the organization to build their own AI-powered workflows on a shared foundation.
The Claude Code Boilerplate is the last step in that chain, the concrete, shareable artifact that gives any engineer a running start. But the operating system is broader: it's the procurement decisions, the vendor relationships, the governance frameworks, the education sessions, and the cross-team coordination that created the conditions for adoption. The boilerplate is where all of that lands in a form someone can actually copy into their project and start using.
What the Boilerplate Provides
The boilerplate is structured as a copyable environment. An engineer drops it into their project directory, runs a few setup commands, and gets a standardized AI working environment with integrations already wired up.
It contains several categories of skills (structured instruction files that the AI follows step-by-step):
Integration Installers — Guided setup scripts for connecting AI to the tools engineers already use: project management, issue tracking, documentation, source control, browser automation, knowledge graphs, and internal service catalogs. These installers detect the local environment, walk through credential gathering, write configuration, and verify the connection works. Setting up these integrations manually is error-prone and time-consuming; the installers make it repeatable. Some of these integrations were developed by other engineers in the organization who contributed their setup patterns back to the boilerplate.
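To make the installer pattern concrete, here is a minimal sketch of what one of these guided setup scripts does: detect the environment, gather a credential, write configuration, and verify. It targets `.mcp.json`, Claude Code's project-scoped MCP configuration file; the server name `issue-tracker`, the npm package, and the `TRACKER_TOKEN` variable are illustrative placeholders, not the boilerplate's actual installers.

```shell
#!/bin/sh
# Sketch of an "integration installer" skill, under stated assumptions.
# The server name, package, and token variable are hypothetical.
set -e

# 1. Detect the local environment.
command -v npx >/dev/null 2>&1 || echo "note: npx not found; a real installer would stop and explain how to install it"

# 2. Gather credentials (a real installer would walk the user through this interactively).
TRACKER_TOKEN="${TRACKER_TOKEN:-demo-token}"

# 3. Write the configuration.
cat > .mcp.json <<EOF
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "issue-tracker-mcp"],
      "env": { "TRACKER_TOKEN": "$TRACKER_TOKEN" }
    }
  }
}
EOF

# 4. Verify the configuration landed where Claude Code will find it.
grep -q '"issue-tracker"' .mcp.json && echo "issue-tracker integration configured"
```

The value of wrapping this in a skill rather than a README is exactly what the article describes: the AI runs the detection and verification steps itself, so the setup is repeatable instead of error-prone.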
Session Management — Skills that give the AI persistent context across conversations. The AI loads what it learned in previous sessions, checks that all integrations are healthy, and at the end of a session summarizes what was done and persists that knowledge. Without this, every conversation starts from zero.
Memory Architecture — A three-layer system:
Transient — Claude's built-in conversation memory, scoped to the current session.
Persistent — A semantic and graph-based knowledge layer (Cognee) that is searchable across all past sessions and supports structured queries against accumulated knowledge.
Long-term validated — Verified markdown files and entries in the company's documentation systems. These are reviewed, stable, and authoritative — the knowledge that has been confirmed correct and worth preserving permanently.
The AI accumulates knowledge over time — conventions, lessons, domain context — and that knowledge compounds rather than being lost between conversations.
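One way to picture the three layers is as a project layout. This sketch is illustrative (the directory names are assumptions, not a Claude Code requirement), but it shows where each kind of knowledge would live:

```
conversation context       # Transient: Claude's built-in memory,
                           # gone when the session ends
.cognee/                   # Persistent: semantic + graph store,
                           # searched at the start of each session
docs/validated/*.md        # Long-term validated: reviewed markdown,
                           # promoted here once confirmed correct
```

The promotion path runs downward: a lesson surfaces in a session, gets persisted to the searchable layer, and only after review graduates into the validated documentation.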
Skill Management — Skills that manage the skill ecosystem itself: adding new skills, auditing existing ones for relevance, assessing whether a proposed skill fits before adding complexity. Before designing a new skill or updating an existing one, the system researches best practices — examining what patterns exist, what has worked elsewhere, and what the current state of the art looks like — so that skills are informed by more than just the immediate need.
Lessons Learned with Automated Skill Enhancement — At the end of each session, the system extracts what went wrong, what conventions were discovered, what tool behaviors were unexpected. These lessons feed back into skill improvements automatically, so the environment gets better with use.
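Claude Code skills are markdown files with YAML frontmatter (a `SKILL.md` per skill). As a hedged sketch of what a lessons-learned skill could look like — the wording below is illustrative, not Groupon's actual skill file:

```markdown
---
name: lessons-learned
description: At session end, extract lessons and fold them back into skills.
---

# Lessons Learned

At the end of every session:

1. List what went wrong, unexpected tool behaviors, and conventions discovered.
2. Append each lesson, with a date stamp, to the persistent memory layer.
3. For each lesson that maps to an existing skill, propose an edit to that
   skill and apply it after review.
4. If a lesson fits no existing skill, flag it as a candidate for a new one.
```

Step 3 is what makes the environment self-improving: the lessons do not just accumulate in a log, they change the instructions the AI follows next time.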
Contribution and Sharing Infrastructure — A structure designed to let engineers share what they build. Individual environment snapshots can be sanitized (secrets stripped across multiple review passes) and uploaded for others to reference. Patterns discovered in one project can be generalized and staged for the shared boilerplate. Contributions from other teams' setups can be pulled in. This is what turned a personal setup into an organizational resource.
Quality and Standards Skills — Skills for scoring issue tickets against quality rubrics, improving ticket descriptions to meet minimum thresholds, generating QA testing guides, and running pre-PR code reviews against engineering standards. These encode team conventions into repeatable checks.
Domain Knowledge Skills — AI-powered knowledge bases for specific services, where engineers can ask natural language questions about how systems work. These were built as a separate project and their setup is provided as an installer in the boilerplate.
Orchestration Skills — Patterns contributed by others in the organization, including a Chief of Staff orchestrator (Dusan's pattern) and multi-model review workflows. These sit on top of the skill system and coordinate complex multi-step processes.
Takeaway:
Do not copy someone else’s skill.
Start every task in a new Claude Code session.
Make sure that after every session, Claude stores the session information and lessons learned.
Make sure the lessons learned are integrated into your skills.
Make sure the skills fit together and do not overlap.
Kill skills that become obsolete.
For recurring tasks, create cron jobs that run agents in a headless Claude session, not in a normal interactive session.
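The headless pattern in the last point can be sketched as a crontab entry. `claude -p` is Claude Code's non-interactive print mode and `--output-format json` is a real flag; the schedule, repository path, prompt, and log file below are illustrative assumptions:

```
# Hypothetical crontab entry: a recurring agent task in a headless session.
0 7 * * 1-5  cd /path/to/repo && claude -p "Score yesterday's new tickets against the quality rubric and post the results" --output-format json >> "$HOME/claude-cron.log" 2>&1
```

Running headless keeps the recurring task out of your interactive sessions, so its context and token usage never pollute the work you are doing by hand.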
How do you do all this right? You don’t; you ask Claude to do it for you.
How many tokens does this cost? Almost certainly less than your hourly rate multiplied by the hours you would need to build it yourself.
What if my company won’t cover the cost? I’d ask you a few questions to answer this:
Is it the right company for you to work in?
Will the company be here for long?
If you financed Claude from your own pocket, wouldn’t it still be worth it?
We're hiring engineers who want to build at this level. Explore open roles at Groupon: grouponcareers.com