Securing AI-Generated Code (Replit)

A technical deep-dive into how Replit secures AI-generated ("vibe-coded") code using a hybrid approach that combines LLM reasoning with deterministic security tools.

Core Findings

  1. AI-Only Scans are Nondeterministic: Identical vulnerabilities can receive different classifications depending on minor syntactic changes or variable naming.
  2. Prompt Sensitivity: Detection coverage depends on what specific issues are mentioned in the prompt, shifting the security burden from the tool to the user.
  3. Dependency Blindness: LLMs cannot reliably identify version-specific CVEs without access to continuous vulnerability feeds.
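The dependency-blindness point can be illustrated with a minimal sketch: a deterministic check that matches an exact package version against an advisory feed, something an LLM cannot do reliably from training data alone. The `ADVISORIES` data below is hypothetical; a real implementation would consume a continuously updated source such as OSV.

```python
# Minimal sketch of deterministic, version-specific CVE matching.
# ADVISORIES is illustrative data, NOT a real vulnerability feed.

def parse_version(v: str) -> tuple:
    """Convert "2.0.1" -> (2, 0, 1) for simple ordered comparison."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical feed: package -> list of (fixed_in_version, advisory id)
ADVISORIES = {
    "examplelib": [("2.0.2", "CVE-XXXX-0001")],
}

def vulnerable(package: str, version: str) -> list[str]:
    """Return advisory ids affecting this exact version.

    Versions at or above the fix version are considered safe.
    """
    hits = []
    for fixed_in, advisory in ADVISORIES.get(package, []):
        if parse_version(version) < parse_version(fixed_in):
            hits.append(advisory)
    return hits
```

The key property is repeatability: the same `(package, version)` input always yields the same finding, which is exactly what prompt-driven scans fail to guarantee.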

The Hybrid Architecture

The blog advocates for a “Hybrid Security Layer” for AI agents:

  • Deterministic Baseline: Static analysis (SAST) and dependency scanning provide consistent, repeatable detection.
  • LLM Reasoning: Used for business logic, intent-level issues, and complex vulnerability patterns that rules-based systems might miss.
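The two layers can be sketched as a single pipeline: a deterministic AST pass that always fires on known-bad patterns, plus an LLM pass (stubbed here) for intent-level review. The `eval`/`exec` rule and the function names are illustrative, not Replit's actual implementation.

```python
import ast

def sast_findings(source: str) -> list[str]:
    """Deterministic baseline: flag eval/exec calls via the AST.

    Same input always produces the same findings, regardless of
    variable names or prompt wording.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            findings.append(f"dangerous call: {node.func.id} (line {node.lineno})")
    return findings

def llm_review(source: str) -> list[str]:
    """Placeholder for the LLM layer (business logic, intent-level issues)."""
    return []  # a real agent would call a model here

def hybrid_scan(source: str) -> list[str]:
    """Hybrid layer: deterministic findings first, LLM findings appended."""
    return sast_findings(source) + llm_review(source)
```

Running the deterministic pass first also gives the LLM layer concrete findings to reason about, rather than asking it to rediscover them.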

Decision-Time Guidance

Instead of front-loading all rules (which creates context bloat and noise), Replit uses Decision-Time Guidance. This involves injecting situational instructions at key moments when the agent is making critical decisions.
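One way to picture Decision-Time Guidance is a trigger table: rules stay out of the base prompt and are injected only when the agent reaches a matching decision point. The action names and guidance strings below are hypothetical, chosen only to show the mechanism.

```python
# Sketch of decision-time guidance: inject situational rules only at
# matching decision points, instead of front-loading every rule.
# Action names and guidance text are illustrative assumptions.

GUIDANCE = {
    "write_sql": "Use parameterized queries; never interpolate user input.",
    "add_dependency": "Pin the version and check it against a vulnerability feed.",
    "handle_secret": "Load secrets from the environment; never hardcode them.",
}

def build_step_prompt(base_prompt: str, action: str) -> str:
    """Append guidance only when this step matches a known trigger.

    Non-matching actions get the base prompt unchanged, avoiding
    context bloat from rules that are irrelevant to the decision.
    """
    rule = GUIDANCE.get(action)
    return f"{base_prompt}\n[Guidance] {rule}" if rule else base_prompt
```

Because most steps match no trigger, the context stays small, and the rules that do appear are immediately relevant to the decision being made.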

Connection to this Project

  • Supports the Agentic DevOps philosophy of “Governance-First” security.
  • Validates the need for external tooling (like ctx7 or SAST) alongside the core AI agent to ensure technical accuracy and security.

Synthesized into: AI-Security