Vibe Code Janitor | EARNST
Vibe coding ships fast. But is it production ready? We make code from Cursor, Copilot, and Claude deployment-ready. Then we help you get users: clean tracking, targeted ads, measurable results.
What this service does
Vibe Code Janitor transforms AI-generated prototypes into production-ready systems that can handle real users, real data, and real problems.
Your campaign data disappears when users actually convert. The first test purchase works perfectly: the event fires, the conversion shows up in your Google Ads dashboard. But when real customers retry checkout after switching payment methods or applying promo codes, events fire twice or not at all. Your conversion tracking shows impossible numbers, and you cannot tell which campaigns actually drive sales. Code Janitor tests every user flow under real conditions (slow connections, consent banner interactions, mobile browsers), ensuring conversion tracking works reliably when ad spend starts flowing.
A single security vulnerability can cost 4.35 million EUR in breach damages. AI code from Cursor or Copilot often contains critical flaws: missing authentication checks, exposed database credentials, unvalidated user inputs. Deploying without an audit means accepting liability you cannot quantify. A 3,000 EUR code audit identifies these risks before launch, provides documented due diligence, and helps prevent regulatory fines. Code Janitor gives you an insurance-grade risk assessment instead of crossing your fingers at deployment.
70% test coverage or production incidents every week. Copilot and Claude write business logic but zero tests. When anyone refactors the codebase, features break silently until users report bugs. Code Janitor implements Jest unit tests for critical functions, Playwright end-to-end tests for user flows, and integration tests for authentication and payment flows. Plus a GitHub Actions CI/CD pipeline: automated tests on every pull request, branch protection rules that enforce passing builds before merge, and Snyk security scanning that catches vulnerable dependencies.
Vibe coding with Cursor, GitHub Copilot, and Claude generates functional code fast, but that code rarely includes error handling, security hardening, or automated tests. It works as a demo but breaks under production load. We audit the codebase, identify technical debt, fix critical issues, add test coverage, and document architectural decisions. This is surgical improvement, not a complete rewrite: fixing security vulnerabilities, adding proper error boundaries, optimizing performance bottlenecks, writing tests for critical paths, and documenting how everything works so your team can maintain it.
Tracking reliability determines campaign success, not creative. When conversion events only fire 60% of the time because of unhandled network errors, your campaign optimization works with incomplete data. You scale the wrong ad sets, pause keywords that are actually performing, and blow budgets on audiences that only look good in broken analytics. Code Janitor tests the tracking implementation across browsers, devices, and network conditions. Every event must fire reliably, or your entire marketing stack makes wrong decisions.
Undocumented code creates vendor lock-in worth 50,000+ EUR annually. When the developer who built your AI-generated app leaves, nobody else understands the codebase. You pay premium rates for maintenance, cannot switch agencies, and delay critical updates because nobody dares touch the code. Code Janitor creates architecture documentation, decision logs, and a clean code structure that any qualified developer can understand. This eliminates dependency on specific individuals, reduces maintenance costs, and increases your negotiating power with service providers.
Production-grade tooling catches problems before users do. TypeScript strict mode enabled, ESLint configured with the recommended-requiring-type-checking ruleset, Prettier enforcing consistent formatting. Husky git hooks run tests and linting before commits, and GitHub Actions runs the full test suite on pull requests. Runtime validation with Zod schemas, error monitoring via Sentry with source maps, and React Error Boundaries that prevent white screens. Security scans with npm audit and Snyk, automated dependency updates via Dependabot, and secrets management with environment variables and .env.example templates.
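A minimal sketch of what fail-fast runtime validation means in practice, here for startup configuration (a production setup would express this as a Zod schema; the config keys below are illustrative):

```typescript
// Fail-fast config validation at application startup. In production this
// would be a Zod schema; hand-rolled here to keep the sketch dependency-free.
interface AppConfig {
  databaseUrl: string;
  stripeSecretKey: string;
  ga4MeasurementId: string;
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const missing: string[] = [];
  const read = (key: string): string => {
    const value = env[key];
    if (!value) missing.push(key);
    return value ?? "";
  };
  const config: AppConfig = {
    databaseUrl: read("DATABASE_URL"),
    stripeSecretKey: read("STRIPE_SECRET_KEY"),
    ga4MeasurementId: read("GA4_MEASUREMENT_ID"),
  };
  // Crash at boot with a clear message instead of mid-request in production.
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return config;
}
```

Called once with `process.env` at boot, this turns a vague 3 a.m. runtime crash into an explicit deploy-time error listing exactly which secrets are missing.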
Who needs this?
Teams that built working prototypes with AI coding tools but face deployment anxiety when real users and real money are on the line.
You need reliable conversion data from day one of paid campaigns. Classic scenario: developers hand you an app built with Cursor, you launch Google Ads and Meta campaigns, spend 8,000 EUR in the first month, but tracking data is inconsistent. Some conversions appear, some disappear, attribution looks wrong, you cannot tell which campaigns actually work. Code Janitor audits and fixes tracking implementation BEFORE you start spending, ensuring every euro of ad budget generates actionable data. Launch with confidence instead of debugging tracking while campaigns burn budget.
You need documented risk assessment before committing to production. Your team built an MVP with Claude or Copilot, it works in demos, but you don't know what breaks at scale or whether there are security vulnerabilities. Investors ask about technical due diligence, compliance requires documented security measures, insurance wants proof of best practices. Code Janitor provides formal audit reports covering security, scalability, data protection. This is investment protection: understand the risks, quantify the costs, make informed deployment decisions.
You need automated tests and CI/CD before the codebase becomes unmaintainable. Common scenario: junior developers used Copilot to build features fast, everything works now, but there are zero tests. You need to refactor the authentication flow, add new payment methods, upgrade dependencies, but every change risks breaking existing functionality. Code Janitor implements comprehensive test suites (Jest for logic, Playwright for user flows), sets up GitHub Actions pipelines with automated testing, configures branch protection requiring passing builds. Future changes become safe instead of gambling.
Typical clients: startups with AI-built MVPs approaching launch, agencies who used AI coding tools to accelerate client projects, product teams who prototyped with Claude and now need to deploy, companies with junior developers using Copilot who need senior code oversight. If you're asking "is this code safe to deploy?" the answer is probably not yet.
How EARNST approaches it
We start with a comprehensive audit, then fix issues systematically based on severity: critical security vulnerabilities first, then production bugs, then technical debt.
Tracking audit comes first, campaign setup comes last. We test every conversion event under real conditions: slow mobile networks, consent banner declined, ad blockers enabled, Safari privacy features active. We identify which events fire reliably, which fail intermittently, which send wrong data. Then we fix tracking implementation before any campaign launches. You get a tracking validation report showing exactly which user actions are measurable and which need fixes. Launch campaigns only after tracking works perfectly, not before.
Prioritized risk assessment with cost-benefit analysis. The audit categorizes every issue by business impact: critical (data breach risk, legal liability), high (revenue loss from production failures), medium (future maintenance costs), low (nice-to-have improvements). You receive a decision matrix showing fix costs versus risk costs. Example: fixing an SQL injection vulnerability costs 1,200 EUR; a potential breach costs 50,000 EUR minimum. You decide which issues get fixed immediately and which are acceptable risks. No technical jargon, just ROI calculations.
Systematic refactoring with version control and testing. Every change goes through pull request review, every fix includes tests proving it works, and every commit has a clear description of what changed and why. We use feature branches, write tests before fixes (TDD when appropriate), and run the full test suite after every change. Refactoring priorities: security vulnerabilities (Snyk scan results), error handling (try/catch blocks and proper logging), test coverage (aim for 70%+ on critical paths), performance optimization (React.memo, lazy loading, query optimization), and documentation (architecture diagrams, setup guides, API documentation).
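As an illustration of the error-handling priority above, a sketch of the refactor applied to AI-generated data fetching (the `FetchLike` and logger shapes are illustrative, not a specific client's API):

```typescript
// Sketch of the error-handling refactor applied to AI-generated data
// fetching: catch failures, log them with context, return a safe fallback.
type FetchLike = (url: string) => Promise<{
  ok: boolean;
  status: number;
  json: () => Promise<unknown>;
}>;

type Logger = { error: (msg: string, meta?: Record<string, unknown>) => void };

export async function fetchJsonSafe<T>(
  url: string,
  fallback: T,
  log: Logger,
  fetchImpl: FetchLike,
): Promise<T> {
  try {
    const res = await fetchImpl(url);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return (await res.json()) as T;
  } catch (err) {
    // Log enough context to debug later; never crash the caller.
    log.error("fetchJsonSafe failed", { url, error: String(err) });
    return fallback;
  }
}
```

The AI-generated version usually does the happy path only; the refactor adds the failure path, the log line that makes the failure findable, and a fallback so one flaky endpoint does not white-screen the app.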
The refactoring process is surgical, not wholesale rewriting. We fix critical security issues first, add error handling and logging, write tests for essential functionality, optimize performance where needed, and document architectural decisions. Every change is version controlled, tested, and explained. The goal is "good enough to deploy confidently and maintain efficiently," not perfect code that takes six months to ship.
Project scope
Typical Code Janitor engagement takes 2 to 4 weeks: 1-2 days audit, then systematic fixes for security, bugs, testing, and documentation.
Fast-track option for campaign launches. You need tracking ready in one week because campaign launch is scheduled and ads are already approved. We offer priority audit focusing exclusively on conversion tracking implementation: GA4 events, Google Ads conversion actions, Meta Pixel setup. Deliverable: working tracking stack tested across devices and browsers, tracking validation report, campaign-ready conversion setup. Standard audit covers broader code quality, fast-track focuses on making marketing measurable immediately. From 1,500 EUR for tracking-only audit.
Flexible engagement models matching your risk tolerance. Three options: (1) Audit-only (500-3,000 EUR): we identify issues, your team fixes them. You get documented risk assessment without implementation costs. (2) Audit + critical fixes (3,000-8,000 EUR): we fix security vulnerabilities and production-breaking bugs, your team handles the rest. Minimum viable production readiness. (3) Full cleanup (5,000-15,000 EUR): we handle everything including tests, documentation, deployment setup. Choose based on your team's capacity and timeline pressure. Most clients pick option 2, then engage us for ongoing code review at 800 EUR/month.
Comprehensive deliverables with complete handoff documentation. You receive: (1) Audit report in Markdown format covering security scan results (npm audit, Snyk), performance profiling, architecture assessment, dependency analysis. (2) Refactored codebase with Git history showing exactly what changed. (3) Test suite: Jest config with 70%+ coverage on critical paths, Playwright E2E tests for main user flows, GitHub Actions workflow config. (4) Documentation: README with setup steps, architecture decision records (ADRs), API documentation, deployment guide for your hosting environment. (5) Security hardening: environment variable templates, secrets management guide, CSP headers config, rate limiting implementation.
Very small projects (single page apps, simple scripts) can be audited in 3-5 days. Large, complex systems may require 6-8 weeks. Ongoing support available on monthly retainer basis for teams who want continued oversight of AI-generated code contributions.
Phase 2: Launch & Grow
The code is clean, tests pass, deployment works. Now the question becomes: how do users find the app, and how do you measure what works?
Tracking Setup
Conversion tracking must be implemented before launch, not after. Missing the first weeks of user data means you cannot tell whether your product-market fit works or which acquisition channels deliver results.
Launch campaigns with working tracking or waste your entire first-month budget. Classic mistake: launch without GA4 events, spend 5,000 EUR on Google Ads in week 1, see traffic but zero conversion data because events are not implemented yet. Week 2 you add tracking retroactively, but week 1 data is lost forever. You don't know which keywords, ad copy, or audiences drove signups. 5,000 EUR spent, zero learnings. Code Janitor implements conversion tracking BEFORE campaigns launch: GA4 events for every key action, Google Ads conversion actions linked correctly, Meta Pixel with purchase events. Day 1 of ad spend generates actionable attribution data.
One vendor instead of coordinating three agencies who blame each other. Standard setup: developers build the app, marketing agency runs campaigns, analytics consultant implements tracking. When conversion data looks wrong, the marketing agency blames the developers for broken checkout, developers blame the analytics consultant for wrong tracking code, analytics consultant blames the marketing agency for misconfigured campaign settings. Three invoices, zero accountability. Code Janitor eliminates this coordination overhead: we already know the codebase from the audit, implement tracking directly, launch the campaigns. One project, one budget, one point of contact. When something breaks, we fix it instead of scheduling alignment calls.
Server-side event tracking eliminates client-side tracking failures. Client-side Google Tag Manager means 30-40% event loss: ad blockers block scripts, Safari ITP blocks third-party cookies, users close tabs before events fire, consent management delays script loading. Code Janitor implements server-side tracking: events fire directly in backend code after successful database writes (e.g., after Stripe payment confirmation), GTM Server-Side Container receives events via Measurement Protocol, first-party cookies work in all browsers, zero performance impact on frontend. Plus: event data includes server-side context (user authentication status, subscription tier, LTV) not available in client-side tracking.
We set up GA4 events for actions that matter to your business: signups, purchases, trial starts, feature activations, checkout steps. No generic templates: events tailored to your app's specific user flows. Plus conversion tracking for Google Ads and Meta configured correctly, so every advertising euro maps to a measurable action. The advantage over pure marketing agencies: we already know your codebase architecture, understand where data flows happen, and implement events directly in code instead of fragile tag manager configurations that break on the next deployment.
Event Implementation
Reliable tracking requires events implemented at the right moment in the right part of your code, not added as afterthought scripts in the frontend.
Events must fire at the exact moment of conversion, not when the page loads. Common problem: purchase event fires when the thank-you page loads, but 20% of users close the tab immediately after payment before the page finishes loading. Your conversion data shows 80% of actual purchases. You optimize campaigns based on incomplete data, scale the wrong ad sets, waste budget. Code Janitor implements events directly in the payment confirmation flow: event fires server-side after Stripe webhook confirms successful payment, before the thank-you page even loads. 100% of conversions captured, attribution data is complete, campaign optimization works with real numbers.
Missing conversion data costs 20-30% of marketing budget in wasted spend. When tracking only captures 70% of conversions because events fail intermittently, your cost-per-acquisition calculations are wrong by 30%. You think CPA is 50 EUR when real CPA is 35 EUR. You pause profitable campaigns, miss scaling opportunities, allocate budget to underperforming channels. Accurate tracking is not a technical detail—it is financial infrastructure. A 2,000 EUR investment in proper event implementation saves 6,000+ EUR annually in prevented budget waste on a 30,000 EUR annual ad spend.
Events belong in backend application code, not injected via tag management. We implement events as part of your application logic: after successful user registration, fire signup event via GA4 Measurement Protocol. After Stripe payment webhook confirms purchase, fire purchase event with transaction_id, revenue, items array. After feature activation in your database, fire custom event with feature name and user tier. Events use structured schemas (TypeScript types for event parameters), get tested in unit tests (mock GA4 client, verify event payload), include error handling (retry logic for failed requests, logging for debugging). This is production-grade instrumentation, not script tags pasted into HTML.
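A sketch of what that looks like for the purchase event, assuming a Stripe webhook handler has just confirmed payment (the endpoint and payload shape follow the GA4 Measurement Protocol; the measurement id, API secret, and retry policy here are placeholders):

```typescript
// Server-side purchase event via the GA4 Measurement Protocol, fired after
// payment confirmation (e.g. from a Stripe webhook handler).
interface PurchaseEvent {
  clientId: string; // GA4 client id captured at checkout
  transactionId: string;
  value: number; // revenue in major currency units
  currency: string;
}

type PostFn = (
  url: string,
  init: { method: string; body: string },
) => Promise<{ ok: boolean; status: number }>;

export function buildMpPayload(e: PurchaseEvent) {
  return {
    client_id: e.clientId,
    events: [
      {
        name: "purchase",
        params: {
          transaction_id: e.transactionId,
          value: e.value,
          currency: e.currency,
        },
      },
    ],
  };
}

export async function sendPurchaseEvent(
  e: PurchaseEvent,
  measurementId: string,
  apiSecret: string,
  post: PostFn,
): Promise<boolean> {
  const url =
    "https://www.google-analytics.com/mp/collect" +
    `?measurement_id=${measurementId}&api_secret=${apiSecret}`;
  // Retry a few times: tracking must not fail silently, but it must also
  // never block or break the checkout flow itself.
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const res = await post(url, {
        method: "POST",
        body: JSON.stringify(buildMpPayload(e)),
      });
      if (res.ok) return true;
    } catch {
      // swallow and retry; the final failure is logged below
    }
  }
  console.error("GA4 purchase event failed after retries", e.transactionId);
  return false;
}
```

Because the payload is built by a typed function rather than a pasted script tag, the event schema can be asserted in a unit test and the send path exercised with a mocked HTTP client.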
Campaign Launch
After tracking works reliably, we launch targeted campaigns on Google Ads and Meta with clear performance metrics from day one.
Start with Search campaigns for bottom-of-funnel intent, then expand to awareness. Week 1: Google Ads Search targeting high-intent keywords (people actively searching for your solution), exact match and phrase match, conversion-focused bidding. Budget 1,000-2,000 EUR to validate conversion tracking works and gather initial CPA data. Week 2-3: if Search CPA is profitable, add Google Shopping or Performance Max for broader reach. Week 4: launch Meta Ads for cold audiences (interest targeting, lookalike audiences), remarketing for website visitors. This staged approach validates tracking early with high-intent traffic before spending on awareness campaigns where attribution is harder.
Clear ROI targets before spending, not vague awareness goals. Every campaign launches with defined success metrics: target cost-per-acquisition (e.g., 40 EUR for SaaS signup), maximum acceptable CPA before pausing (e.g., 80 EUR), target ROAS for e-commerce (e.g., 300% = 3 EUR revenue per 1 EUR ad spend). You see real-time dashboard showing actual performance versus targets. After 2 weeks and 2,000 EUR spend, you know whether the channel works or needs optimization. No three-month "brand awareness" campaigns with unmeasurable results. Just performance marketing with clear payback periods.
Campaign structure maps to product features and user segments, not generic templates. Google Ads account structure: separate campaigns per product/service, separate ad groups per feature or use case, ad copy references specific features, landing pages match ad copy promises. Conversion tracking: separate conversion actions for signup, trial start, paid purchase (each with different values for Smart Bidding). Meta Ads: campaign budget optimization (CBO) enabled, 3-5 ad sets per campaign testing different audiences, 2-3 ad creatives per ad set testing different messaging angles. Tracking: Meta Pixel standard events (Purchase, Lead) plus custom events for product-specific actions. Everything connected to GA4 for cross-platform attribution analysis.
Ongoing Optimization
After launch, campaigns need continuous optimization based on conversion data, not monthly check-ins.
Weekly optimization cycles catch performance drops before they waste budgets. Week 1: gather baseline data, identify best-performing keywords/audiences/ad creatives. Week 2: pause underperforming ad sets (CPA above target), increase bids on top performers, test new ad copy variations. Week 3: analyze conversion data by device, location, time-of-day—often mobile performs differently than desktop, certain regions have better conversion rates. Week 4: implement learnings as campaign structure changes, add negative keywords, refine audience targeting. This is active management, not set-and-forget campaigns that drift off target.
Monthly reporting shows spend, conversions, ROI—no vanity metrics. You receive dashboard access showing: total ad spend per channel, total conversions, cost-per-acquisition, revenue per channel (if applicable), ROAS. No impressions, clicks, or engagement metrics without conversion context. Report includes: what we tested, what worked, what we paused, recommended budget allocation for next month. Example: "Google Search CPA at 35 EUR (target 40 EUR), recommend increasing budget by 50%. Meta Ads CPA at 95 EUR (target 80 EUR), paused 3 underperforming ad sets, testing new creative angles." Clear decisions, clear accountability.
Optimization uses GA4 data layer plus platform attribution for complete picture. We analyze performance in three layers: (1) Google Ads / Meta Ads dashboard data (last-click attribution, platform perspective). (2) GA4 attribution reports (data-driven attribution, cross-channel view). (3) Server-side conversion logs (ground truth, especially for subscription businesses where LTV matters). Combine all three to understand: which channels drive immediate conversions (Ads dashboard), which channels assist conversions (GA4 multi-touch attribution), which channels drive high-LTV customers (server-side data joined with CRM). Optimization decisions based on complete data, not single-platform view.
Before / After: Typical Vibe Code Project
Test Coverage: 0% → 72%
Security Issues (Critical): 8 → 0
Lighthouse Score: 45 → 92
Build Time: 180 → 24
Typical Security Findings in AI-Generated Code
From Audit to Campaign
Week 1: Code Audit
Week 2–4: Refactoring & Tests
Week 5: Tracking Setup
Week 6+: Ads & Growth
Typical Results
70%+ test coverage after a Code Janitor engagement
0 critical security issues after the audit
100% documented, maintainable code
Tracking live from day 1 of launch
What you get
Code Audit & Report
Comprehensive review of security issues, bugs, and architectural problems.
Refactoring Plan
Prioritized list of what needs fixing and why it matters.
Test Suite
Automated tests covering critical functionality and edge cases.
Security Check
Vulnerability scan, dependency audit, and security hardening.
Technical Documentation
Architecture overview, setup guide, and deployment documentation.
Analytics & Tracking Setup
GA4 events for key actions, conversion tracking for Google Ads and Meta. Every relevant user action captured.
Campaign Setup
Google Ads and Meta Ads campaigns with correct conversion tracking. Structured, measurable, ROAS-optimized.
Launch Dashboard
Real-time overview: conversions, traffic sources, cost-per-acquisition. Numbers instead of guesswork.
“Ernst is the marketing professional you want by your side when the fires of disruption are raging.”
Bradford Goodwin
Owner, Malcontent Marketing
Frequently Asked Questions
Which AI tools do you work with?
Cursor, GitHub Copilot, Claude (Code and Artifacts), ChatGPT, v0, Bolt. We know the typical patterns and problems each tool produces and what to watch out for.
How much does a code audit cost?
The code audit starts at 500 EUR. You receive a report covering security, performance, and maintainability assessment within 3-5 business days.
How much does the cleanup cost?
Based on the audit findings: 2,000 to 5,000 EUR depending on scope. Includes error handling, tests (70%+ coverage), security fixes, and documentation.
Is there a launch package?
Yes. From 2,500 EUR: tracking setup (server-side), Google/Meta Ads campaign, and launch support. Your code becomes not only production-ready but also visible.
Do you do the refactoring or just consulting?
Both. We can deliver the code audit as a report only (your team implements) or handle the complete refactoring. Often we fix the critical issues and your team works through the medium priorities.
How does ongoing code review work?
We review pull requests before they are merged, similar to a senior engineer on your team. Via GitHub/GitLab integration, with comments directly in the code. Monthly package or per PR.
Is AI-generated code really that bad?
Not bad, but unreliable. AI produces functional code but misses edge cases, security best practices, and long-term maintainability. Fine for prototypes, not for production with real users.
Do you also handle marketing for the finished app?
Yes. Phase 2 after code cleanup: we set up GA4 events, configure conversion tracking for Google Ads and Meta, and launch the first campaigns. Since we already know your code, we implement events directly in code: no tag manager chaos, no briefing a third-party agency.
Which advertising platforms do you use?
Google Ads (Search, Display, Performance Max) and Meta Ads (Facebook, Instagram). Plus GA4 as analytics foundation and GTM Server-Side for clean first-party tracking. Everything GDPR-compliant with Consent Mode v2.
You might also be interested in
Tracking & Data Architecture
20–40% of your conversion data is missing. Server-side tracking, Consent Mode v2, 18+ events, and engagement scoring bring it back.
Learn more →
GDPR & Compliance Audit
We analyze your tracking infrastructure. GDPR score, accessibility check, actionable recommendations.
Learn more →
Ready to discuss?
Tell us about your project. We will get back to you within 24 hours.