Article · April 4, 2026 · 8 min read

We built our own website with Claude Code. Then we scanned it with NEKOD

We used AI to build our own website, then ran our 360° assessment on it. Here is every vulnerability we found and how we fixed them.

By Antigoni Kourou

We built nekod.co (this very website you are reading) with Claude Code, then scanned it for readiness before launch. We found 6 security issues, 2 of them critical. Here is what happened.

Why We Did This

NEKOD is a governance platform for AI-generated apps. If we are going to tell other people their vibe-coded apps need scanning, the least we can do is scan our own. So that is exactly what we did.

We built nekod.co using Claude Code, Anthropic's AI coding assistant. The entire website, from the Next.js frontend to the Sanity CMS integration to the contact form API, was built with AI-assisted development. It took days, not months. And like every vibe-coded app, it needed governance before going live.

The Stack

Our website runs on Next.js 14 with the App Router, TypeScript, and Tailwind CSS. Blog content is managed through Sanity.io as a headless CMS. The contact form uses the Resend API for email delivery. Everything is deployed on Vercel. The entire codebase was generated and iterated on using Claude Code.

What We Found: The Critical Issues

1. HTML Injection in Email Templates

Severity: Critical. Our contact form accepted user input and placed it directly into an HTML email template without escaping. A user submitting <script>alert('xss')</script> as their name would inject HTML into the email received by our team. While email clients have protections, this is a textbook injection vulnerability.

The fix: We added an escapeHtml() utility function that sanitises all user input before it enters the email template. Every field (name, email, company, message) is now escaped for HTML special characters.
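A minimal sketch of the kind of escaping utility described above (the actual implementation on our site may differ):

```typescript
// Escape the five HTML special characters so user input is rendered as
// text, not markup, when interpolated into an HTML email template.
// The "&" replacement must run first to avoid double-escaping.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// A hostile "name" field is neutralised before it enters the template.
const name = "<script>alert('xss')</script>";
console.log(escapeHtml(name));
// → &lt;script&gt;alert(&#39;xss&#39;)&lt;/script&gt;
```

Running every field through a helper like this at the template boundary means no single form field can smuggle markup into the email.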

2. Hardcoded Email Recipient

Severity: High. The contact form recipient email was hardcoded in the source code. This is both a security concern (email exposed in compiled bundles) and an operational one (changing the recipient requires a code deployment). We moved it to an environment variable with a fallback.
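The shape of the fix, sketched with hypothetical names (CONTACT_RECIPIENT and the fallback address are illustrative, not our actual configuration):

```typescript
// Read the recipient from the environment so it can change without a
// code deployment; fall back to a known-safe default if the variable
// is unset. Variable and fallback names here are hypothetical.
const CONTACT_RECIPIENT: string =
  process.env.CONTACT_RECIPIENT ?? "fallback@example.com";
```

Because the value now lives in the deployment environment rather than the source, it no longer appears in compiled client bundles and can be rotated from the Vercel dashboard.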

What We Found: The Medium Issues

3. In-Memory Rate Limiting

Our contact form had rate limiting, which is good. But it used an in-memory Map, which resets on every server restart and does not persist across serverless function instances on Vercel. In practice, this means the rate limit is easy to bypass. For our current traffic this is acceptable, but for production at scale, a Redis-based solution like Upstash would be the right call.
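To make the weakness concrete, here is a sketch of an in-memory sliding-window limiter of the kind described (not our exact code). The Map lives in module scope, so it is wiped on every cold start and never shared between serverless instances:

```typescript
// Per-IP request timestamps, held in process memory only.
const hits = new Map<string, number[]>();

// Returns true when the caller has exceeded `limit` requests within
// the last `windowMs` milliseconds.
function isRateLimited(ip: string, limit = 5, windowMs = 60_000): boolean {
  const now = Date.now();
  // Keep only timestamps still inside the window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) return true;
  recent.push(now);
  hits.set(ip, recent);
  return false;
}
```

Every new serverless instance starts with an empty Map, which is exactly why an attacker who spreads requests across instances (or waits for a cold start) sidesteps the limit, and why a shared store like Upstash Redis is the production answer.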

4. TypeScript Build Errors Suppressed

Our next.config.mjs had ignoreBuildErrors: true for TypeScript. This is a common pattern when moving fast with AI-generated code: the AI produces code that works but may have type mismatches that are tedious to resolve. The risk is that genuine type errors get silently shipped. We flagged this for future cleanup.
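For reference, this is what the risky setting looks like in a Next.js config (a representative fragment, not our full file):

```javascript
// next.config.mjs
const nextConfig = {
  typescript: {
    // Lets `next build` succeed even when TypeScript reports errors.
    // Removing this (or setting it to false) makes the build fail on
    // genuine type errors instead of silently shipping them.
    ignoreBuildErrors: true,
  },
};

export default nextConfig;
```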

5. Verbose Error Logging

Our API error handler was logging the full error object to the console. In production, that could expose stack traces, internal file paths, or API details to anyone with access to the logs. We sanitised it to log only the error message string.
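A sketch of the before-and-after shape of that fix (function and label names are illustrative):

```typescript
// Before: console.error(err) dumps the stack trace and internal paths.
// After: extract a plain message string and log only that.
function logApiError(err: unknown): string {
  const message = err instanceof Error ? err.message : "Unknown error";
  console.error("contact-api error:", message); // no stack, no file paths
  return message;
}
```

The `unknown` parameter type matters in TypeScript: anything can be thrown, so the handler must narrow the value before touching `.message` rather than assuming it received an Error.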

6. IP Spoofing Risk in Rate Limiter

The rate limiter extracted client IPs from the X-Forwarded-For header, which is user-controllable. An attacker could bypass rate limits by spoofing this header. Vercel's proxy layer mitigates this somewhat, but it is still a defence-in-depth gap.
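The spoofing risk comes from how the header is parsed. X-Forwarded-For is a comma-separated chain that the client can prepend arbitrary values to; a trusted proxy appends the address it actually saw at the end. Behind a single trusted proxy, reading the last entry rather than the first is the safer choice. A sketch under that assumption, not our exact implementation:

```typescript
// Extract a client IP from an X-Forwarded-For chain, trusting only the
// entry appended by the proxy (the rightmost one) rather than whatever
// the client chose to prepend.
function clientIp(forwardedFor: string | null, fallback = "unknown"): string {
  if (!forwardedFor) return fallback;
  const hops = forwardedFor.split(",").map((s) => s.trim());
  return hops[hops.length - 1] || fallback;
}
```

If a client sends `X-Forwarded-For: 6.6.6.6` and the proxy appends the real address, a first-entry parser keys the rate limit on the attacker-chosen `6.6.6.6`, while the last-entry parser keys it on the real address.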

What We Got Right

Not everything was a finding. Several areas came back clean: Zod schema validation on all contact form inputs with proper type checking and constraints. TypeScript strict mode enabled across the project. Safe Portable Text rendering for blog content using the official Sanity library. No hardcoded API keys or secrets anywhere in the source code. Environment variables properly managed with .env.local in .gitignore.

The Pattern We See in Every Vibe-Coded App

Our website is a small, relatively simple application. And we still found 6 issues, including 2 critical ones. This is not because AI writes bad code. It is because AI optimises for functionality, not for the edge cases that security and compliance require.

Every vibe-coded app we have assessed shows the same pattern: the happy path works perfectly. The code is clean, well-structured, and functional. But the security boundaries, the error handling edge cases, the compliance requirements, these are where gaps appear. Not because the AI cannot handle them, but because nobody asked it to.

The Takeaway

If you are building with AI (and in 2026, you should be), scanning before launch is not optional. It is the difference between shipping with confidence and shipping with crossed fingers. We found issues in our own code. You will find them in yours. The question is whether you find them before or after your users do.

Get your free scan at nekod.co and find out what is hiding in your vibe-coded app.

Ready to secure your vibe-coded apps?

Get a free assessment of your vibe-coded application and discover what needs attention before launch.