steven-tey/chathn
Chat with Hacker News using natural language. Built with OpenAI Functions and Vercel AI SDK.
Converts natural language queries into Hacker News API calls and responses
Users type natural language queries in the chat interface which are sent to the API endpoint. The endpoint applies rate limiting, then sends the conversation history to OpenAI along with function definitions for Hacker News operations. OpenAI decides whether to call functions based on the user's intent, executes the appropriate Hacker News API calls, and streams back a natural language response incorporating the fetched data.
Under the hood, the system uses 2 feedback loops, 2 data pools, and 4 control points to manage its runtime behavior.
A 7-component fullstack application. 10 files analyzed. Data flows through 5 distinct pipeline stages.
How Data Flows Through the System
- User input capture — The useChat hook in page.tsx captures user text input and adds it to the messages array as a user role message, then triggers handleSubmit to send the conversation to /api/chat [user text input → ChatMessage]
- Rate limit validation — The POST handler in route.ts extracts the client IP and checks against Upstash Redis using a sliding window of 50 requests per day, returning a 429 error if limit exceeded [HTTP request → rate limit decision] (config: KV_REST_API_URL, KV_REST_API_TOKEN)
- OpenAI function call setup — The handler sends the conversation messages to OpenAI's chat completions API along with the functions array from functions.ts, which defines 4 HN operations: get_top_stories, get_story, get_story_with_comments, and summarize_top_story [ChatMessage → OpenAI API request] (config: OPENAI_API_KEY)
- Function execution routing — When OpenAI decides to call a function, runFunction dispatches to the appropriate implementation (get_top_stories, get_story, etc.) which makes HTTP requests to hacker-news.firebaseio.com API endpoints [OpenAIFunctionCall → HackerNewsStory]
- Response streaming — OpenAIStream converts the API response into a streaming format, and StreamingTextResponse sends chunks back to the client where useChat renders them incrementally in the UI [OpenAI API response → StreamingTextResponse]
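The five stages above can be condensed into a simplified handler with injected dependencies. This is a sketch: the names handleChat, checkRateLimit, and callOpenAI are hypothetical, and the real route.ts streams responses with OpenAIStream and StreamingTextResponse from the Vercel AI SDK rather than returning a plain string.

```typescript
// Simplified sketch of the /api/chat POST flow (hypothetical names).
// Dependencies are injected so each stage can be exercised in isolation.

type ChatMessage = { role: "user" | "assistant" | "function"; content: string };

interface Deps {
  checkRateLimit: (ip: string) => Promise<boolean>;          // stage 2
  callOpenAI: (messages: ChatMessage[]) => Promise<string>;  // stages 3-5
}

async function handleChat(
  ip: string,
  messages: ChatMessage[],
  deps: Deps
): Promise<{ status: number; body: string }> {
  // Stage 2: rate limit validation (50 requests/day per IP in the real app)
  if (!(await deps.checkRateLimit(ip))) {
    return { status: 429, body: "You have reached your request limit for the day." };
  }
  // Stages 3-5: forward the conversation to OpenAI and return the response
  // (the real handler streams this back chunk by chunk)
  const reply = await deps.callOpenAI(messages);
  return { status: 200, body: reply };
}
```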
Data Models
The data structures that flow between stages — the contracts that hold the system together.
- ChatMessage (from ai/react, Vercel AI SDK) — object with id: string, role: 'user'|'assistant'|'function', content: string, and optional function_call metadata. Created when the user submits input, processed by the OpenAI API, and streamed back as assistant responses with function call results embedded.
- HackerNewsStory (app/api/chat/functions.ts) — object with id: number, title: string, url?: string, score: number, descendants: number, time: number, plus a computed hnUrl: string pointing to news.ycombinator.com. Fetched from the hacker-news.firebaseio.com API, enriched with the HN URL, and returned as function call results to OpenAI.
- Function definition (app/api/chat/functions.ts) — CompletionCreateParams.Function with name: string, description: string, and parameters: a JSON Schema defining required/optional params. Defined as static schemas, sent to OpenAI to enable function calling, then executed when OpenAI decides to invoke them based on user queries.
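As a rough TypeScript sketch of the first two models (field names taken from the descriptions above; the exact declarations in the repo may differ):

```typescript
// Hypothetical shapes for the data models described above.
interface ChatMessage {
  id: string;
  role: "user" | "assistant" | "function";
  content: string;
  function_call?: { name: string; arguments: string };
}

// Raw item as returned by hacker-news.firebaseio.com
interface HackerNewsItem {
  id: number;
  title: string;
  url?: string;
  score: number;
  descendants: number; // comment count
  time: number;        // Unix timestamp
}

// Enriched story with the computed HN discussion link
type HackerNewsStory = HackerNewsItem & { hnUrl: string };

// The enrichment step: attach the canonical news.ycombinator.com URL.
function withHnUrl(item: HackerNewsItem): HackerNewsStory {
  return { ...item, hnUrl: `https://news.ycombinator.com/item?id=${item.id}` };
}
```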
Hidden Assumptions
Things this code relies on but never validates. These are the things that cause silent failures when the system changes.
The Hacker News API always returns story IDs in the expected format, and ids.slice(0, limit) will contain valid story IDs that still exist when fetched individually
If this fails: If the top stories API returns fewer IDs than requested or contains stale/deleted story IDs, Promise.all will fail on 404 responses from get_story calls, causing the entire function to throw and break the user's request
app/api/chat/functions.ts:get_top_stories
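One hedged mitigation, assuming fetchStory resolves to null for deleted items: use Promise.allSettled so a single stale ID cannot reject the whole batch. This is a sketch of a defensive variant, not the repo's current code.

```typescript
// Defensive variant: tolerate stale or deleted story IDs instead of letting
// one failed fetch reject the whole Promise.all.
async function getTopStoriesSafe(
  fetchStory: (id: number) => Promise<{ id: number } | null>,
  ids: number[],
  limit = 10
): Promise<{ id: number }[]> {
  const results = await Promise.allSettled(ids.slice(0, limit).map(fetchStory));
  // Keep only fulfilled, non-null stories; deleted items come back as null,
  // stale IDs show up as rejected promises.
  return results.flatMap((r) =>
    r.status === "fulfilled" && r.value !== null ? [r.value] : []
  );
}
```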
All Hacker News story items returned by the API will have the expected structure with numeric id field, but stories can be deleted/null without warning
If this fails: When the API returns null for a deleted story, the function tries to destructure null causing 'Cannot read properties of null' errors, breaking the entire conversation flow
app/api/chat/functions.ts:get_story
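A null guard avoids the destructure crash. The sketch below is a hypothetical variant of get_story with the fetch injected; the real function calls the HN API directly.

```typescript
// Defensive sketch: the HN API returns null for deleted items, so check
// before spreading/destructuring (hypothetical variant, not the repo's code).
async function getStorySafe(
  fetchItem: (id: number) => Promise<Record<string, unknown> | null>,
  id: number
): Promise<Record<string, unknown>> {
  const item = await fetchItem(id);
  if (item === null) {
    // Return a structured error the model can explain to the user,
    // instead of throwing "Cannot read properties of null".
    return { error: `Story ${id} was deleted or does not exist` };
  }
  return { ...item, hnUrl: `https://news.ycombinator.com/item?id=${id}` };
}
```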
Promise.all fetching multiple stories concurrently won't exceed rate limits or connection limits to hacker-news.firebaseio.com
If this fails: With default limit=10, the function makes 11 concurrent HTTP requests (topstories + 10 individual stories). If Hacker News rate limits or the edge runtime has connection limits, some requests fail and Promise.all rejects, breaking the response
app/api/chat/functions.ts:get_top_stories
The x-forwarded-for header contains a single IP address string that can be used as a unique identifier for rate limiting
If this fails: If the header contains multiple IPs (comma-separated proxy chain) or is spoofed/missing, rate limiting either fails with Redis key errors or allows bypassing limits entirely, potentially enabling abuse
app/api/chat/route.ts:POST
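A common defensive pattern, shown here as a hypothetical helper rather than what route.ts necessarily does: take only the first entry of the proxy chain and fall back to a shared key when the header is absent.

```typescript
// Sketch: normalize x-forwarded-for before using it as a rate-limit key.
// The header may hold a comma-separated proxy chain; the client IP is first.
function clientIp(forwardedFor: string | null): string {
  if (!forwardedFor) return "unknown"; // fall back to one shared bucket
  return forwardedFor.split(",")[0].trim();
}
```

Falling back to a shared bucket is deliberately conservative: anonymous requests compete for one quota instead of bypassing the limit entirely.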
OpenAI function calls will always return serializable JSON data that can be safely passed back to the AI model
If this fails: If runFunction returns objects with circular references, functions, or other non-serializable data, JSON.stringify in the OpenAI API call fails silently or throws, breaking the streaming response
app/api/chat/route.ts:POST
These function names are defined in the functions array but their implementations are missing from the provided code
If this fails: When OpenAI tries to call get_story_with_comments or summarize_top_story based on user queries, runFunction will fail to find the implementation, causing function call errors that break the conversation
app/api/chat/functions.ts:get_story_with_comments and summarize_top_story
HTTP 429 status responses from the API will always have a body that can be processed by useChat, and the response handler executes before the error handler
If this fails: If a 429 response has no body or malformed data, useChat may still try to process it as a valid chat response while also showing the rate limit toast, leading to confusing UI state with both error and partial response
app/page.tsx:useChat onResponse
The edge runtime environment supports all the required Node.js APIs used by the OpenAI SDK and Upstash Redis client
If this fails: If the OpenAI SDK or Ratelimit client tries to use Node.js APIs not available in edge runtime, the handler throws runtime errors in production that don't appear in development
app/api/chat/route.ts:runtime = 'edge'
50 requests per day is sufficient for typical usage patterns and the sliding window implementation in Upstash handles timezone boundaries correctly
If this fails: Users hitting the limit early in their timezone day are blocked for up to 24 hours, potentially causing customer churn. Also, if sliding window calculation is off, users might get blocked or allowed incorrectly across day boundaries
app/api/chat/route.ts:Ratelimit.slidingWindow(50, '1 d')
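The mechanism can be illustrated with an in-memory sliding-log counter. This is a simplification for illustration only: the real implementation is Upstash's Redis-backed Ratelimit.slidingWindow(50, '1 d'), which uses a weighted two-window algorithm rather than a per-request log.

```typescript
// In-memory sliding-log rate limiter (illustrative stand-in for the
// Redis-backed sliding window used in the real app).
class SlidingWindow {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit -> caller returns 429
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

Because the window slides per request rather than resetting at midnight, there is no timezone boundary at all: a user blocked now is unblocked exactly when their oldest request ages past 24 hours.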
The runFunction dispatcher exists and properly routes function calls, but its implementation is not shown in the provided code
If this fails: If runFunction doesn't handle unknown function names gracefully or throws errors during function execution, the entire OpenAI streaming response fails and users get no feedback about what went wrong
app/api/chat/functions.ts:runFunction
System Behavior
How the system operates at runtime — where data accumulates, what loops, what waits, and what controls what.
Data Pools
The useChat hook maintains the full conversation history in component state, including user messages, assistant responses, and function call results
Upstash Redis stores request counts per IP address for sliding window rate limiting, tracking usage over 24-hour periods
Feedback Loops
- Function call resolution (recursive, reinforcing) — Trigger: OpenAI decides to call a function based on user query. Action: Execute the function (e.g., fetch HN stories), return results to OpenAI, which then generates a natural language response incorporating the data. Exit: OpenAI produces final text response without additional function calls.
- Error retry prompting (retry, balancing) — Trigger: Rate limit hit or API error occurs. Action: Display error toast to user via sonner, track error event with Vercel Analytics. Exit: User can retry their request after understanding the error.
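The function call resolution loop can be sketched as follows, with the model and function runner injected as stubs. The names resolve and maxRounds are hypothetical; the real app delegates this loop to the Vercel AI SDK's experimental_onFunctionCall machinery.

```typescript
// Sketch of the "function call resolution" loop: keep asking the model until
// it answers with plain text instead of another function call.
type ModelReply =
  | { type: "text"; content: string }
  | { type: "function_call"; name: string; args: Record<string, unknown> };

async function resolve(
  callModel: (history: string[]) => Promise<ModelReply>,
  runFn: (name: string, args: Record<string, unknown>) => Promise<unknown>,
  history: string[],
  maxRounds = 5 // guard against the reinforcing loop never terminating
): Promise<string> {
  for (let i = 0; i < maxRounds; i++) {
    const reply = await callModel(history);
    if (reply.type === "text") return reply.content; // exit condition
    const result = await runFn(reply.name, reply.args);
    history.push(JSON.stringify(result)); // feed function result back in
  }
  throw new Error("function-call loop did not converge");
}
```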
Delays
- OpenAI API latency (async-processing, ~1-5 seconds) — Loading indicator shows while waiting for function calls and response generation, with streaming providing incremental updates
- Hacker News API calls (async-processing, ~200-500ms per story) — Multiple concurrent fetches when getting top stories, with Promise.all batching the requests
Control Points
- Rate limit threshold (threshold) — Controls: Maximum requests per IP per day (currently 50). Default: 50
- Story fetch limit (threshold) — Controls: How many top stories to fetch when the user asks for top stories. Default: 10
- Runtime environment (env-var) — Controls: Whether rate limiting is enabled (disabled in development). Default: NODE_ENV
- OpenAI API key (env-var) — Controls: Authentication for OpenAI function calling and completions. Default: OPENAI_API_KEY
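The first and third control points combine into a single gate: rate limiting runs only outside development and only when the Upstash credentials are configured. The helper below is hypothetical, a sketch of that gating logic rather than the repo's exact code.

```typescript
// Sketch: decide whether to enforce rate limiting, based on the NODE_ENV
// control point and the presence of Upstash credentials (KV_REST_API_URL).
function shouldRateLimit(nodeEnv: string | undefined, hasKvConfig: boolean): boolean {
  return nodeEnv !== "development" && hasKvConfig;
}
```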
Technology Stack
- Next.js 13 — Provides the full-stack framework with app directory, API routes, and server components for the chat application
- Vercel AI SDK — Handles OpenAI API integration, streaming responses, and chat state management with the useChat hook
- OpenAI API — Powers function calling to interpret natural language queries and generate responses incorporating Hacker News data
- Upstash KV — Redis-compatible key-value store for rate limiting, tracking request counts per IP address
- React Markdown — Renders AI-generated responses that include markdown formatting like tables and lists
- TailwindCSS — Utility-first CSS framework for styling the chat interface and responsive layout
Key Components
- useChat (orchestrator) — app/page.tsx — Manages conversation state, handles form submission, and streams AI responses from the chat API endpoint while tracking analytics events
- POST handler (gateway) — app/api/chat/route.ts — Processes incoming chat requests, applies rate limiting via Upstash, calls the OpenAI API with function definitions, and streams back responses
- functions array (registry) — app/api/chat/functions.ts — Defines the OpenAI function schemas that map natural language intents to specific Hacker News operations like getting top stories or story details
- get_top_stories (adapter) — app/api/chat/functions.ts — Fetches top story IDs from the Hacker News API, then fetches full story details for each one up to the specified limit
- get_story (adapter) — app/api/chat/functions.ts — Fetches a single story by ID from the Hacker News API and enriches it with the canonical HN URL
- runFunction (dispatcher) — app/api/chat/functions.ts — Routes OpenAI function calls to their corresponding implementation functions based on function name
- Ratelimit (validator) — app/api/chat/route.ts — Enforces sliding window rate limiting (50 requests per day per IP) using Upstash Redis to prevent API abuse
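The dispatcher pattern can be sketched as a name-to-implementation map with an explicit error for unknown names, which addresses one of the hidden assumptions above. The implementations here are stubs; the real ones call the Hacker News API.

```typescript
// Sketch of the runFunction dispatcher: a registry mapping OpenAI function
// names to implementations, failing loudly on unregistered names.
type HnFunction = (args: Record<string, unknown>) => Promise<unknown>;

const implementations: Record<string, HnFunction> = {
  get_top_stories: async (args) => `top ${args.limit ?? 10} stories`, // stub
  get_story: async (args) => `story ${args.id}`,                      // stub
};

async function runFunction(
  name: string,
  args: Record<string, unknown>
): Promise<unknown> {
  const fn = implementations[name];
  if (!fn) throw new Error(`Unknown function: ${name}`); // explicit, not silent
  return fn(args);
}
```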
Frequently Asked Questions
What is chathn used for?
chathn converts natural language queries into Hacker News API calls and responses. steven-tey/chathn is a 7-component fullstack application written in TypeScript. Data flows through 5 distinct pipeline stages. The codebase contains 10 files.
How is chathn architected?
chathn is organized into 4 architecture layers: Chat Interface, API Gateway, Function Definitions, UI Components. Data flows through 5 distinct pipeline stages. This layered structure keeps concerns separated and modules independent.
How does data flow through chathn?
Data moves through 5 stages: User input capture → Rate limit validation → OpenAI function call setup → Function execution routing → Response streaming. Users type natural language queries in the chat interface which are sent to the API endpoint. The endpoint applies rate limiting, then sends the conversation history to OpenAI along with function definitions for Hacker News operations. OpenAI decides whether to call functions based on the user's intent, executes the appropriate Hacker News API calls, and streams back a natural language response incorporating the fetched data. This pipeline design reflects a complex multi-stage processing system.
What technologies does chathn use?
The core stack includes Next.js 13 (Provides the full-stack framework with app directory, API routes, and server components for the chat application), Vercel AI SDK (Handles OpenAI API integration, streaming responses, and chat state management with the useChat hook), OpenAI API (Powers function calling to interpret natural language queries and generate responses incorporating Hacker News data), Upstash KV (Redis-compatible key-value store for rate limiting, tracking request counts per IP address), React Markdown (Renders AI-generated responses that include markdown formatting like tables and lists), TailwindCSS (Utility-first CSS framework for styling the chat interface and responsive layout). A focused set of dependencies that keeps the build manageable.
What system dynamics does chathn have?
chathn exhibits 2 data pools (conversation state, rate limit cache), 2 feedback loops, 4 control points, and 2 delays. The feedback loops handle recursive function calling and error retries. These runtime behaviors shape how the system responds to load, failures, and configuration changes.
What design patterns does chathn use?
4 design patterns detected: Function calling agent, Streaming responses, Rate limiting with external store, Edge runtime optimization.
Analyzed on April 20, 2026 by CodeSea. Written by Karolina Sarna.