About the Position

We are looking for a Next.js Developer who works AI-first: Claude, Cursor, Windsurf, or similar tools are your primary development instrument, not a fallback. You will build modern web applications with Next.js and TypeScript — first in sandbox projects under the guidance of engineers and AI capability leads, then on paid commercial projects with full responsibility for deadlines, quality, and deliverables.

Why TypeScript and Next.js? Strict typing catches entire classes of bugs at compile time, and it forces the AI agent to reason about data shapes, null checks, and type boundaries before the code runs.
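
As a minimal illustration (example code, not from our stack), strict mode turns a forgotten null check into a compile error instead of a runtime crash:

```typescript
// Illustrative only: with "strict": true, the compiler refuses to let
// an optional field be used without a check.
interface Requirement {
  id: string;
  assignee?: string; // optional: may be undefined
}

function assigneeLabel(req: Requirement): string {
  // Writing `req.assignee.toUpperCase()` directly would not compile:
  // "Object is possibly 'undefined'". The check below is mandatory.
  return req.assignee !== undefined ? req.assignee.toUpperCase() : "unassigned";
}
```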

An engineer with AI knows why the code works — and can prove it through specifications, tests, and traceability. A vibe coder knows that it works, until it doesn’t. We hire engineers.

For details about how our internship works, check out our Internship Overview.
Learn more about our team at https://foreachpartners.com/.

How We Work

Our engineering philosophy is Parsimony-Driven Development (PDD) — every artifact, instruction, and line of code must justify its existence. If removing it introduces no ambiguity and loses no meaning, it should not be there.

We implement PDD through Specification-Driven Development (SDD) — specification precedes implementation. Before writing code, you formulate requirements, derive contracts, and only then implement. AI tools accelerate every stage but replace none.

SDD rests on four pillars: Traceability, DRY, Deterministic Enforcement, and Parsimony. You MUST read and understand these principles before starting the test task:

Required reading: Specification-Driven Development: Four Pillars

AI generates code. You are responsible for it. The competence we evaluate is not how fast you produce output — it is how well you verify correctness, enforce constraints, and catch what the model gets wrong. The operator owns the result, not the agent.

What You Will Do

  • Build web applications and internal tools using Next.js, TypeScript, and React
  • Write specifications before UI — component contracts, data models, API schemas
  • Implement responsive, accessible interfaces that follow design specifications
  • Review and correct AI-generated frontend code for correctness, performance, and accessibility
  • Maintain traceability between requirements, components, and tests

What We’re Looking For

  • AI Proficiency:
    • Confident user of at least one AI IDE (Cursor, Windsurf, or Claude Code)
    • Understanding of prompt engineering: how to structure instructions, provide context, and iterate on output
    • Understanding of context engineering: what to include in the model’s context and what to leave out
    • Ability to decompose tasks for AI and critically evaluate output
  • Frontend Foundations:
    • Basic understanding of React and TypeScript
    • Familiarity with component-based architecture
    • Pet project experience is sufficient
  • Engineering Mindset:
    • Understanding of why specifications matter
    • Comfort with Git, npm/pnpm, and browser dev tools
  • Nice-to-Have:
    • Experience with Next.js App Router, Server Components
    • Familiarity with Tailwind CSS, testing libraries (Jest, Playwright)
    • Understanding of REST API design

Why Apply?

  • AI-First Culture: Work in a team where AI tools are the norm, not an experiment
  • Structured Growth: Start in sandbox projects, prove your quality control skills, then move to paid commercial work
  • Career Path: Outstanding interns transition to permanent roles with full engineering responsibility

Test Task

To apply, complete the test task below. This is how we evaluate your ability to work with AI tools on a real engineering problem.

You MUST use AI (Claude, Cursor, Windsurf, or similar) as your primary development tool. Manual coding without AI assistance is not what we’re evaluating.

Time budget: 4–6 hours with AI tools.

The Big Picture

You are building one piece of a larger SDD toolchain. Multiple interns across different specializations work on the same product:

  • Rust service — scans codebases, computes traceability metrics, serves data via REST API
  • Next.js dashboard (your task) — web interface consuming the API
  • Flutter app — mobile interface consuming the API
  • DevOps infrastructure — deploys and monitors the whole stack

All parts share a common API specification: SDD Navigator API · download YAML


SDD Navigator Dashboard

Build a Next.js web application that consumes the SDD Navigator API and displays an interactive dashboard showing specification coverage for a project: which requirements are implemented, tested, and which remain unaddressed.

Since the Rust backend may not be available yet, the app MUST work in two modes:

  • API mode: fetches data from the live backend (NEXT_PUBLIC_API_URL env variable)
  • Mock mode (default for development): loads data from local JSON files that match the API response schema

Step 1: Write the specification first

Before writing any code, create a requirements.yaml for the dashboard itself. Define at least 8 requirements covering: API integration, data display, filtering, sorting, accessibility, error/loading states, deployment, and theming. Each requirement gets a unique ID (e.g., SCD-UI-001).

Each requirement MUST include a description field — a MUST/SHOULD directive that can be verified. Example:

- id: SCD-UI-001
  title: Load project stats from /stats endpoint
  description: Dashboard MUST fetch RequirementStats, AnnotationStats, and TaskStats from GET /stats on initial load and display them in the summary panel.

This file becomes the single source of truth for what the app does.

Step 2: Implement the data layer

Study the SDD Navigator API spec and implement a TypeScript client module (lib/api.ts or similar):

  • Typed API client matching the OpenAPI schema (no any, strict TypeScript)
  • Functions for each endpoint:
    • getStats() — fetches GET /stats, returns Stats (RequirementStats, AnnotationStats, TaskStats, coverage, lastScanAt)
    • listRequirements(filters?) — fetches GET /requirements with optional type (FR/AR), status (covered/partial/missing), sort, and order query params
    • getRequirement(id) — fetches GET /requirements/{id}, returns RequirementDetail with linked annotations and tasks
    • listAnnotations(filters?) — fetches GET /annotations with optional type and orphans params
    • listTasks(filters?) — fetches GET /tasks with optional status, orphans, sort, and order params
    • triggerScan() — sends POST /scan, returns ScanStatus
    • getScanStatus() — fetches GET /scan, returns ScanStatus
  • Mock data provider: loads from local JSON files (data/*.json) matching API response shapes
  • Switchable via environment variable or config
  • Error handling: typed errors for network failures, 404s, malformed responses — no thrown exceptions for expected failures

Provide mock data matching the sample data structure: 8 requirements (FR-SCAN-001 through FR-API-003, plus AR-PERF-001 and AR-SEC-001) with types FR/AR and statuses covered/partial/missing, 16 annotations (14 linked, 2 orphans), and 6 tasks (5 linked, 1 orphan). Use these stats: 62.5% coverage; 16 annotations total (10 impl, 6 test, of which 2 are orphans); 6 tasks (2 done, 1 in_progress, 3 open, of which 1 is an orphan).
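
To make the two modes concrete, here is one possible shape for the client module. All type and field names below are illustrative assumptions; the OpenAPI spec, not this sketch, is the source of truth:

```typescript
// Sketch of lib/api.ts: one interface, two providers, env-based switching.
type ReqStatus = "covered" | "partial" | "missing";

interface Stats {
  coverage: number;
  lastScanAt: string;
}

// Expected failures are returned as values, not thrown (a Result type).
type ApiResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: "network" | "not_found" | "malformed" };

interface DataProvider {
  getStats(): Promise<ApiResult<Stats>>;
}

// Injecting a fetch-like function keeps the provider testable.
type FetchLike = (url: string) => Promise<{ status: number; json(): Promise<unknown> }>;

// Mock provider: in the real app this would load data/*.json shaped
// like the API responses; inlined here for brevity.
const mockProvider: DataProvider = {
  async getStats() {
    return { ok: true, data: { coverage: 62.5, lastScanAt: "2024-01-01T00:00:00Z" } };
  },
};

// API provider: talks to the live backend.
function apiProvider(baseUrl: string, fetchFn: FetchLike): DataProvider {
  return {
    async getStats() {
      try {
        const res = await fetchFn(`${baseUrl}/stats`);
        if (res.status === 404) return { ok: false, error: "not_found" };
        return { ok: true, data: (await res.json()) as Stats };
      } catch {
        return { ok: false, error: "network" };
      }
    },
  };
}

// Mock mode is the default; API mode when the env variable is set.
function createProvider(fetchFn: FetchLike): DataProvider {
  const url = process.env.NEXT_PUBLIC_API_URL;
  return url ? apiProvider(url, fetchFn) : mockProvider;
}
```

In the app you would call `createProvider(fetch)` once in a Server Component and pass data down; injecting the fetch function is also what lets the tests in Step 4 exercise 404 and network-failure paths without a live backend.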

Step 3: Build the dashboard

Pages and components:

  1. Summary panel (top of page):

    • Total requirements count
    • Breakdown by type: FR / AR — as numbers
    • Breakdown by coverage status: covered / partial / missing — as numbers and visual bars or chart
    • Overall coverage percentage with a progress indicator
    • Annotation orphan count and task orphan count (if any) with a warning indicator
    • Timestamp of the last scan (lastScanAt)
  2. Requirements table:

    • Columns: ID, type, title, status, updatedAt
    • Sortable by ID or updatedAt (ascending / descending toggle)
    • Filterable by:
      • Type (FR / AR) — multi-select chips
      • Coverage status (covered / partial / missing) — multi-select chips
    • URL-synced filters: filter state reflected in query params (?type=FR&status=missing), so filtered views are shareable
    • Empty state when filters produce no results
  3. Requirement detail (expand row or navigate):

    • All fields: id, type, title, description, status, createdAt, updatedAt
    • Linked annotations: file path, line number, type (impl / test), code snippet
    • Linked tasks: id, title, status, assignee (if present), updatedAt
    • Coverage assessment label: “Fully covered”, “Needs tests”, “Not implemented”
    • Link back to table with current filter preserved
  4. Tasks panel:

    • Separate section listing all work items from GET /tasks
    • Columns: ID, requirement ID, title, status, assignee
    • Filterable by task status (open / in_progress / done)
    • Orphan tasks (requirementId not in requirements.yaml) highlighted distinctly
  5. Orphan panel:

    • Separate section (collapsible) for annotation orphans: file, line, unknown reqId, type
    • Separate subsection for task orphans: task id, title, unknown requirementId
    • Both types visible in one place
  6. Theme:

    • Dark / light toggle, persisted in localStorage
    • Respects prefers-color-scheme on first visit
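
The URL-synced filter requirement above reduces to two pure functions (a sketch; the query param names follow the ?type=FR&status=missing example): serialize filter state into a query string, and parse it back. The table then reads its initial state from the URL and pushes changes with something like router.replace on every filter click.

```typescript
// Shareable filter state: round-trips through URLSearchParams.
interface Filters {
  type: ("FR" | "AR")[];
  status: ("covered" | "partial" | "missing")[];
}

function toQuery(f: Filters): string {
  const params = new URLSearchParams();
  // Empty selections are omitted so the default URL stays clean.
  if (f.type.length) params.set("type", f.type.join(","));
  if (f.status.length) params.set("status", f.status.join(","));
  return params.toString();
}

function fromQuery(qs: string): Filters {
  const params = new URLSearchParams(qs);
  return {
    type: (params.get("type")?.split(",") ?? []) as Filters["type"],
    status: (params.get("status")?.split(",") ?? []) as Filters["status"],
  };
}
```

Keeping serialization pure means it can be unit-tested in Step 4 without rendering a single component.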

Technical requirements:

  • Next.js 14+ with App Router
  • TypeScript strict mode (strict: true in tsconfig.json)
  • Server Components for data loading, Client Components for interactivity
  • Responsive: desktop shows table, mobile shows card layout
  • Semantic HTML, keyboard-navigable table, aria-sort on sortable columns, sufficient color contrast (WCAG AA)
  • No external database — all data loaded from files on the server side

Step 4: Write comprehensive tests

  • Data layer tests: valid data, malformed YAML, malformed JSON, empty files, orphan detection for both annotations and tasks, 0% coverage, 100% coverage, partial coverage edge cases
  • Component tests: summary panel renders correct counts (requirements, annotations, tasks), table renders correct rows, filter by type and status produces correct subset, sort by id and updatedAt changes order, detail view shows both annotations and tasks, tasks panel renders and filters correctly
  • Every test MUST have a // @req SCD-XXX-NNN comment referencing which requirement from Step 1 it verifies
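
The annotation convention looks like this in practice (a framework-agnostic sketch; SCD-UI-003 and filterByStatus are placeholder names, and in a real suite the body would live in Jest or Playwright):

```typescript
type Row = { id: string; status: "covered" | "partial" | "missing" };

function filterByStatus(rows: Row[], status: Row["status"]): Row[] {
  return rows.filter((r) => r.status === status);
}

// @req SCD-UI-003 — filtering by coverage status produces the correct subset
function testFilterByStatus(): void {
  const rows: Row[] = [
    { id: "FR-SCAN-001", status: "covered" },
    { id: "AR-SEC-001", status: "missing" },
  ];
  const result = filterByStatus(rows, "missing");
  if (result.length !== 1 || result[0].id !== "AR-SEC-001") {
    throw new Error("filter by status failed");
  }
}
```

The `@req` comment is what the self-validation script in Step 5 greps for, so the ID must match an entry in your requirements.yaml exactly.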

Step 5: Self-validation and deployment

Deterministic checks (you MUST run all of these before submitting):

  • tsc --noEmit — type check
  • ESLint — lint
  • All tests
  • next build — production build
  • Self-validation script (scripts/check-coverage.ts): parses the project’s own requirements.yaml and scans source for @req annotations, prints a coverage report, exits with code 1 if any requirement is unimplemented
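
The core of scripts/check-coverage.ts can be sketched as three small functions (the regexes are illustrative; the real script would read requirements.yaml and walk the source tree):

```typescript
// Shared matcher: collects the first capture group of every match.
function extractMatches(text: string, pattern: RegExp): string[] {
  const re = new RegExp(pattern.source, "g"); // fresh state per call
  const out: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(text)) !== null) out.push(m[1]);
  return out;
}

function extractReqIds(yamlText: string): string[] {
  // Matches "- id: SCD-UI-001" entries in requirements.yaml
  return extractMatches(yamlText, /-\s*id:\s*(\S+)/);
}

function extractReqAnnotations(source: string): string[] {
  // Matches "// @req SCD-UI-001" comments in source files
  return extractMatches(source, /@req\s+(\S+)/);
}

function uncovered(reqIds: string[], annotated: string[]): string[] {
  const seen = new Set(annotated);
  return reqIds.filter((id) => !seen.has(id));
}
// The real script would print a coverage report and call process.exit(1)
// whenever uncovered() returns a non-empty list.
```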

Deployment:

  • Deploy to Vercel, Netlify, or similar
  • Provide the live URL in README.md

Deliverables

Provide a link to a public GitHub repository containing:

  • Full source code
  • Sample data files (8+ requirements, 16+ annotations, 6+ tasks)
  • requirements.yaml for the dashboard itself (with description on every entry)
  • Deployed URL of the working dashboard
  • README.md: what the app does, how to run locally, how to deploy
  • PROCESS.md — your AI development process artifact (see below)

How We Evaluate

We are fully transparent about evaluation. Below are the exact prompts we use.

Step 1: You generate PROCESS.md

After completing the task, run the following prompt against your full AI conversation history. Commit the output as PROCESS.md in the repository root.

Analyze all AI conversations used during development of this project.
For each conversation, extract timestamps (start time, end time) from the chat metadata.

Produce a markdown document PROCESS.md with the following sections:

1. **Tools Used** — which AI tools (IDE, model, plugins) the developer used and for what.
2. **Conversation Log** — for each AI session: start/end timestamps, topic, what the
developer asked for, what was accepted, what was rejected or corrected.
3. **Timeline** — chronological list of major steps with timestamps and duration.
4. **Key Decisions** — what architectural and implementation choices the developer made,
and why. What alternatives were considered?
5. **What the Developer Controlled** — which parts of the output the developer reviewed,
tested, or rewrote. Be specific: list files, functions, and config sections.
What verification steps did the developer take before accepting AI output?
6. **Course Corrections** — moments where the developer identified incorrect, incomplete,
or suboptimal AI output and changed direction. What was the issue, how was it caught,
and what did the developer do instead?
7. **Self-Assessment** — which SDD pillars (Traceability, DRY, Deterministic Enforcement,
Parsimony) are well-covered in the submission and which need improvement.

Step 2: We evaluate your repository

We run the following prompt against your submission. You can run it yourself before submitting:

Evaluate this repository against the SDD (Specification-Driven Development) four pillars:

1. **Traceability**: Do commits reference requirement IDs? Do components and tests link
to requirements via @req annotations? Is there a requirements.yaml with description
on every entry? Are there orphan annotations or unimplemented requirements?

2. **DRY**: Are data model types defined once in the coverage module and imported
everywhere? Is filter logic shared, not duplicated? Are there copied type definitions
between server and client code?

3. **Deterministic Enforcement**: Are tsc, ESLint, and tests used to verify correctness?
Is there a self-validation script that checks traceability? Can any check be automated
further? Are there manual verification steps that could be scripted?

4. **Parsimony**: Are dependencies minimal and justified? Is there a CSS framework that
adds no value? Are there boilerplate abstractions or unused modules? Is the README
concise and factual?

For each pillar: rate as PASS / PARTIAL / FAIL with specific file references and line
numbers. Produce a summary table and a list of concrete violations.

A good submission is honest, not polished. We value a candidate who catches AI mistakes over one who ships fast without checking.


If you feel overwhelmed by the volume of new concepts here, that is normal. What we describe is the cutting edge of AI-assisted engineering; these are not yet widely known practices. Open Cursor or Claude and study this material together with your AI tool. Just remember: it is you who is learning, not your agent. Move to practice as quickly as possible; only hands-on work turns information into applicable skill.

We look forward to seeing how you build with AI — and how you think about what AI builds for you.