steven-tey/chathn

Chat with Hacker News using natural language. Built with OpenAI Functions and Vercel AI SDK.

1,182 stars · TypeScript · 7 components

Converts natural language queries into Hacker News API calls and responses

Users type natural language queries in the chat interface which are sent to the API endpoint. The endpoint applies rate limiting, then sends the conversation history to OpenAI along with function definitions for Hacker News operations. OpenAI decides whether to call functions based on the user's intent, executes the appropriate Hacker News API calls, and streams back a natural language response incorporating the fetched data.

Under the hood, the system uses 2 feedback loops, 2 data pools, and 4 control points to manage its runtime behavior.

A 7-component fullstack application. 10 files analyzed. Data flows through 5 distinct pipeline stages.

How Data Flows Through the System

  1. User input capture — The useChat hook in page.tsx captures user text input and adds it to the messages array as a user role message, then triggers handleSubmit to send the conversation to /api/chat [user text input → ChatMessage]
  2. Rate limit validation — The POST handler in route.ts extracts the client IP and checks against Upstash Redis using a sliding window of 50 requests per day, returning a 429 error if limit exceeded [HTTP request → rate limit decision] (config: KV_REST_API_URL, KV_REST_API_TOKEN)
  3. OpenAI function call setup — The handler sends the conversation messages to OpenAI's chat completions API along with the functions array from functions.ts, which defines 4 HN operations: get_top_stories, get_story, get_story_with_comments, and summarize_top_story [ChatMessage → OpenAI API request] (config: OPENAI_API_KEY)
  4. Function execution routing — When OpenAI decides to call a function, runFunction dispatches to the appropriate implementation (get_top_stories, get_story, etc.) which makes HTTP requests to hacker-news.firebaseio.com API endpoints [OpenAIFunctionCall → HackerNewsStory]
  5. Response streaming — OpenAIStream converts the API response into a streaming format, and StreamingTextResponse sends chunks back to the client where useChat renders them incrementally in the UI [OpenAI API response → StreamingTextResponse]
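Steps 3 through 5 hinge on the payload the handler assembles for OpenAI. A minimal sketch of that assembly, assuming the message and function shapes described above; the model name and helper name are illustrative, not taken from the repo:

```typescript
// Illustrative shapes for the conversation and function schemas.
type Message = {
  role: "user" | "assistant" | "function";
  content: string;
};

type FunctionDef = {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the function's args
};

// Pure helper: combine conversation history with the HN function schemas
// into the body sent to OpenAI's chat completions endpoint with streaming on.
function buildCompletionRequest(messages: Message[], functions: FunctionDef[]) {
  return {
    model: "gpt-3.5-turbo", // hypothetical; the repo may pin a different model
    stream: true,
    messages,
    functions,
  };
}
```

Because the functions are passed as data, OpenAI can decide per-turn whether to answer directly or emit a function_call that the handler then routes in step 4.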

Data Models

The data structures that flow between stages — the contracts that hold the system together.

ChatMessage (ai/react, from the Vercel AI SDK)
An object with id: string, role: 'user' | 'assistant' | 'function', content: string, and optional function_call metadata.
Created when the user submits input, processed by the OpenAI API, and streamed back as assistant responses with function call results embedded.
HackerNewsStory (app/api/chat/functions.ts)
An object with id: number, title: string, url?: string, score: number, descendants: number, time: number, plus a computed hnUrl: string pointing to news.ycombinator.com.
Fetched from the hacker-news.firebaseio.com API, enriched with the HN URL, and returned as function call results to OpenAI.
OpenAIFunctionCall (app/api/chat/functions.ts)
A CompletionCreateParams.Function with name: string, description: string, and parameters: a JSON Schema defining required and optional params.
Defined as static schemas, sent to OpenAI to enable function calling, and executed when OpenAI decides to invoke them based on user queries.
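These contracts can be written out as TypeScript types. A hedged sketch: field names follow the descriptions above, not necessarily the repo's exact definitions, and withHnUrl is a hypothetical helper illustrating the hnUrl enrichment:

```typescript
// Chat message shape as described above (Vercel AI SDK convention).
interface ChatMessage {
  id: string;
  role: "user" | "assistant" | "function";
  content: string;
  function_call?: { name: string; arguments: string };
}

// Story shape as described above; hnUrl is computed locally,
// not returned by the Firebase API.
interface HackerNewsStory {
  id: number;
  title: string;
  url?: string;
  score: number;
  descendants: number;
  time: number;
  hnUrl: string;
}

// The hnUrl enrichment is a one-liner over anything with a numeric id.
function withHnUrl<T extends { id: number }>(story: T): T & { hnUrl: string } {
  return { ...story, hnUrl: `https://news.ycombinator.com/item?id=${story.id}` };
}
```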

Hidden Assumptions

Things this code relies on but never validates. These are the things that cause silent failures when the system changes.

critical Scale unguarded

The Hacker News API always returns story IDs in the expected format and the ids.slice(0, limit) will contain valid story IDs that exist when individually fetched

If this fails: If the top stories API returns fewer IDs than requested or contains stale/deleted story IDs, Promise.all will reject on 404 responses from get_story calls, causing the entire function to throw and break the user's request

app/api/chat/functions.ts:get_top_stories
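One way to remove this assumption, sketched under the shapes described above: switch from Promise.all to Promise.allSettled and drop null or failed fetches. fetchItem is a hypothetical wrapper around the Firebase item endpoint, not the repo's code:

```typescript
// A story item may be null (deleted) per the HN API's behavior.
type Item = { id: number; title: string } | null;

// Defensive variant: tolerate short ID lists, deleted stories, and
// individual request failures without rejecting the whole batch.
async function getTopStoriesSafe(
  ids: number[],
  limit: number,
  fetchItem: (id: number) => Promise<Item>
): Promise<Item[]> {
  const wanted = ids.slice(0, Math.max(0, limit)); // may be shorter than limit
  const settled = await Promise.allSettled(wanted.map(fetchItem));
  // Keep only fulfilled, non-null stories; one 404 no longer sinks the batch.
  return settled
    .filter((r): r is PromiseFulfilledResult<Item> => r.status === "fulfilled")
    .map((r) => r.value)
    .filter((s): s is Exclude<Item, null> => s !== null);
}
```

The trade-off is that the model may receive fewer stories than requested, which it can simply report, rather than the user's whole request failing.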
critical Domain unguarded

All Hacker News story items returned by the API will have the expected structure with numeric id field, but stories can be deleted/null without warning

If this fails: When the API returns null for a deleted story, the function tries to destructure null causing 'Cannot read properties of null' errors, breaking the entire conversation flow

app/api/chat/functions.ts:get_story
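A minimal null guard for this case, as a sketch: return a structured error object instead of destructuring null. The shapes are illustrative, not the repo's:

```typescript
// Either a usable story or an explicit, serializable error.
type StoryResult =
  | { ok: true; id: number; title: string }
  | { ok: false; error: string };

// Guard against the HN API returning null for deleted/missing items.
function guardStory(
  raw: { id: number; title: string } | null,
  requestedId: number
): StoryResult {
  if (raw === null) {
    return { ok: false, error: `Story ${requestedId} is deleted or does not exist` };
  }
  return { ok: true, id: raw.id, title: raw.title };
}
```

Returning the error as data lets OpenAI phrase a graceful reply ("that story appears to be deleted") instead of the conversation breaking with a runtime exception.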
warning Resource unguarded

Promise.all fetching multiple stories concurrently won't exceed rate limits or connection limits to hacker-news.firebaseio.com

If this fails: With default limit=10, the function makes 11 concurrent HTTP requests (topstories + 10 individual stories). If Hacker News rate limits or the edge runtime has connection limits, some requests fail and Promise.all rejects, breaking the response

app/api/chat/functions.ts:get_top_stories
critical Environment unguarded

The x-forwarded-for header contains a single IP address string that can be used as a unique identifier for rate limiting

If this fails: If the header contains multiple IPs (comma-separated proxy chain) or is spoofed/missing, rate limiting either fails with Redis key errors or allows bypassing limits entirely, potentially enabling abuse

app/api/chat/route.ts:POST
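A small, hypothetical parser addressing this: x-forwarded-for may be a comma-separated proxy chain ("client, proxy1, proxy2") or absent entirely, so take the first hop and fall back to a fixed key so rate limiting always has a stable identifier:

```typescript
// Extract a usable rate-limit key from an x-forwarded-for header value.
// Falls back to "unknown" when the header is missing or empty, so the
// Redis key is always well-formed (at the cost of pooling such clients).
function clientIp(forwardedFor: string | null): string {
  if (!forwardedFor) return "unknown";
  const first = forwardedFor.split(",")[0]!.trim();
  return first.length > 0 ? first : "unknown";
}
```

Note that the header is client-controllable unless a trusted proxy sets it, so this mitigates malformed input but not deliberate spoofing.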
warning Contract unguarded

OpenAI function calls will always return serializable JSON data that can be safely passed back to the AI model

If this fails: If runFunction returns objects with circular references, functions, or other non-serializable data, JSON.stringify in the OpenAI API call fails silently or throws, breaking the streaming response

app/api/chat/route.ts:POST
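Guarding the serialization step is cheap. A sketch: JSON.stringify throws on circular references (and BigInt values), so a wrapper can keep the stream alive with an explicit error payload:

```typescript
// Serialize a function result for the OpenAI API, never throwing.
function safeSerialize(value: unknown): string {
  try {
    // JSON.stringify(undefined) returns undefined at runtime, hence the fallback.
    return JSON.stringify(value) ?? "null";
  } catch {
    return JSON.stringify({ error: "function result was not serializable" });
  }
}
```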
critical Temporal unguarded

These function names are defined in the functions array but their implementations are missing from the provided code

If this fails: When OpenAI tries to call get_story_with_comments or summarize_top_story based on user queries, runFunction will fail to find the implementation, causing function call errors that break the conversation

app/api/chat/functions.ts:get_story_with_comments and summarize_top_story
warning Ordering weakly guarded

HTTP 429 status responses from the API will always have a body that can be processed by useChat, and the response handler executes before the error handler

If this fails: If a 429 response has no body or malformed data, useChat may still try to process it as a valid chat response while also showing the rate limit toast, leading to confusing UI state with both error and partial response

app/page.tsx:useChat onResponse
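The two concerns can be separated with a pure helper that classifies the response before useChat touches it; the names here are illustrative, not the repo's:

```typescript
// What the client should do with an incoming /api/chat response.
type ResponseAction = "render" | "rate-limited" | "error";

// Classify by status code so a 429 is never treated as chat content.
function classifyResponse(status: number): ResponseAction {
  if (status === 429) return "rate-limited"; // show toast, skip rendering
  if (status >= 400) return "error";
  return "render";
}
```

Inside an onResponse callback, returning early (or throwing) on "rate-limited" prevents the confusing state described above where a toast and a partial response appear together.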
warning Resource unguarded

The edge runtime environment supports all the required Node.js APIs used by the OpenAI SDK and Upstash Redis client

If this fails: If the OpenAI SDK or Ratelimit client tries to use Node.js APIs not available in edge runtime, the handler throws runtime errors in production that don't appear in development

app/api/chat/route.ts:runtime = 'edge'
info Scale unguarded

50 requests per day is sufficient for typical usage patterns and the sliding window implementation in Upstash handles timezone boundaries correctly

If this fails: Users hitting the limit early in their timezone day are blocked for up to 24 hours, potentially causing customer churn. Also, if sliding window calculation is off, users might get blocked or allowed incorrectly across day boundaries

app/api/chat/route.ts:Ratelimit.slidingWindow(50, '1 d')
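To reason about the boundary behavior, here is an in-memory approximation of what a sliding-window limiter does (Upstash performs the equivalent bookkeeping in Redis; this is a conceptual sketch, not its implementation):

```typescript
// Minimal sliding-window limiter: allow at most `limit` requests per key
// within any trailing window of `windowMs` milliseconds.
class SlidingWindow {
  private hits = new Map<string, number[]>(); // key -> request timestamps (ms)

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have aged out of the trailing window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Because the window trails the current time rather than resetting at midnight, there is no timezone boundary at all: a blocked user regains capacity exactly as their oldest requests age out.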
critical Contract unguarded

The runFunction dispatcher exists and properly routes function calls, but its implementation is not shown in the provided code

If this fails: If runFunction doesn't handle unknown function names gracefully or throws errors during function execution, the entire OpenAI streaming response fails and users get no feedback about what went wrong

app/api/chat/functions.ts:runFunction
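One way to close this gap, sketched with hypothetical stubs rather than the repo's actual runFunction: an explicit dispatch table with a safe default, so an unknown or failing function yields a structured error instead of an unhandled throw:

```typescript
type HnFunction = (args: Record<string, unknown>) => Promise<unknown>;

// Dispatch table; real implementations would call the HN API. Stubs here.
const implementations: Record<string, HnFunction> = {
  get_top_stories: async () => [{ id: 1, title: "stub" }],
  get_story: async (args) => ({ id: args.id, title: "stub" }),
  // get_story_with_comments and summarize_top_story would be registered here
};

// Route a function call by name, converting every failure mode into data
// that can be serialized back to OpenAI.
async function runFunctionSafe(name: string, args: Record<string, unknown>) {
  const fn = implementations[name];
  if (!fn) {
    return { error: `No implementation registered for function "${name}"` };
  }
  try {
    return await fn(args);
  } catch (err) {
    return { error: `Function "${name}" failed: ${String(err)}` };
  }
}
```

The model then sees "no such function" as a result it can explain, rather than the stream dying silently.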

System Behavior

How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.

Data Pools

Conversation state (in-memory)
The useChat hook maintains the full conversation history in component state, including user messages, assistant responses, and function call results
Rate limit cache (cache)
Upstash Redis stores request counts per IP address for sliding window rate limiting, tracking usage over 24-hour periods

Feedback Loops

Delays

Control Points

Technology Stack

Next.js 13 (framework)
Provides the full-stack framework with app directory, API routes, and server components for the chat application
Vercel AI SDK (library)
Handles OpenAI API integration, streaming responses, and chat state management with the useChat hook
OpenAI API (library)
Powers function calling to interpret natural language queries and generate responses incorporating Hacker News data
Upstash KV (database)
Redis-compatible key-value store for rate limiting, tracking request counts per IP address
React Markdown (library)
Renders AI-generated responses that include markdown formatting like tables and lists
TailwindCSS (framework)
Utility-first CSS framework for styling the chat interface and responsive layout


Frequently Asked Questions

What is chathn used for?

Converts natural language queries into Hacker News API calls and responses. steven-tey/chathn is a 7-component fullstack application written in TypeScript. Data flows through 5 distinct pipeline stages. The codebase contains 10 files.

How is chathn architected?

chathn is organized into 4 architecture layers: Chat Interface, API Gateway, Function Definitions, UI Components. Data flows through 5 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.

How does data flow through chathn?

Data moves through 5 stages: User input capture → Rate limit validation → OpenAI function call setup → Function execution routing → Response streaming. Users type natural language queries in the chat interface which are sent to the API endpoint. The endpoint applies rate limiting, then sends the conversation history to OpenAI along with function definitions for Hacker News operations. OpenAI decides whether to call functions based on the user's intent, executes the appropriate Hacker News API calls, and streams back a natural language response incorporating the fetched data. This pipeline design reflects a complex multi-stage processing system.

What technologies does chathn use?

The core stack includes Next.js 13 (Provides the full-stack framework with app directory, API routes, and server components for the chat application), Vercel AI SDK (Handles OpenAI API integration, streaming responses, and chat state management with the useChat hook), OpenAI API (Powers function calling to interpret natural language queries and generate responses incorporating Hacker News data), Upstash KV (Redis-compatible key-value store for rate limiting, tracking request counts per IP address), React Markdown (Renders AI-generated responses that include markdown formatting like tables and lists), TailwindCSS (Utility-first CSS framework for styling the chat interface and responsive layout). A focused set of dependencies that keeps the build manageable.

What system dynamics does chathn have?

chathn exhibits 2 data pools (Conversation state, Rate limit cache), 2 feedback loops, 4 control points, and 2 delays. The feedback loops handle recursion and retries. These runtime behaviors shape how the system responds to load, failures, and configuration changes.

What design patterns does chathn use?

4 design patterns detected: Function calling agent, Streaming responses, Rate limiting with external store, Edge runtime optimization.

Analyzed on April 20, 2026 by CodeSea.