
Streamdown: Vercel's Streaming Markdown Renderer

Streamdown is Vercel's streaming Markdown renderer that displays LLM-generated content in real time with progressive rendering and syntax highlighting.

The rise of LLM-powered chat interfaces has created a peculiar user experience problem: watching text appear character by character is exciting, but watching partially rendered Markdown flicker and jump is frustrating. When an LLM generates a code block, a table, or a nested list, standard Markdown renderers cannot handle the incremental arrival of tokens. They wait for the complete output, then render it all at once – defeating the purpose of streaming. Users stare at raw text until the stream finishes, then the page jumps as everything reformats simultaneously.

Streamdown is Vercel’s elegant solution to this problem. It is an open-source streaming Markdown renderer specifically designed for LLM-generated content. The key insight is that Markdown rendering must happen progressively: each token should be rendered immediately, elements should appear as they become unambiguous, and the DOM should update incrementally without layout instability.

The library is purpose-built for the AI era. Traditional Markdown renderers assume complete, static input. Streamdown assumes incomplete, streaming input and makes intelligent decisions about how to render partial content. When an LLM starts generating a code block, Streamdown renders an open code block container immediately. When it starts a table, it renders the opening table tag and populates cells as they arrive. This creates a smooth, progressive visual experience that matches the streaming nature of LLM responses.
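As a rough illustration of the idea (a sketch of the general technique, not Streamdown's actual source), a progressive renderer can append provisional closing markers to whatever partial Markdown has arrived, so that a standard parser still produces well-formed output on every update:

// Illustrative sketch of progressive rendering, not Streamdown's internals.
// Close any dangling syntax so the partial Markdown parses cleanly.
function closePartialMarkdown(partial) {
  let text = partial;

  // An odd number of ``` fences means a code block is still open.
  const fenceCount = (text.match(/```/g) || []).length;
  if (fenceCount % 2 === 1) {
    text += '\n```';
  }

  // The same applies to unbalanced bold markers.
  const boldCount = (text.match(/\*\*/g) || []).length;
  if (boldCount % 2 === 1) {
    text += '**';
  }

  return text;
}

// Example: an unfinished code block gets a provisional closing fence.
closePartialMarkdown('Here is code:\n```js\nconsole.log("hi")');
// → 'Here is code:\n```js\nconsole.log("hi")\n```'

Each time a new token arrives, the provisional closers are discarded and recomputed against the longer input, so the user only ever sees valid, fully rendered Markdown.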

Core Architecture

Streamdown’s internal pipeline processes incoming tokens through four stages:

| Stage | Component | Function |
| --- | --- | --- |
| Tokenizer | Incremental Lexer | Parse partial Markdown tokens as they arrive |
| Builder | Partial AST Builder | Construct an AST that can represent incomplete elements |
| Renderer | Progressive DOM Renderer | Update the live DOM with each AST change |
| Finisher | Post-Stream Resolver | Finalize any remaining incomplete elements |
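
To make the table concrete, here is a hypothetical sketch of how the four stages could be wired together. The function names are illustrative and are not part of Streamdown's exported API:

// Hypothetical wiring of the four pipeline stages from the table above.
// None of these function names belong to Streamdown's public API.
function createPipeline({ tokenize, build, render, finish }) {
  const state = { buffer: '', ast: null };

  return {
    // Called for every chunk that arrives from the LLM stream.
    push(chunk) {
      state.buffer += chunk;
      const tokens = tokenize(state.buffer); // Tokenizer: incremental lexer
      state.ast = build(tokens);             // Builder: partial AST
      render(state.ast);                     // Renderer: progressive DOM update
    },
    // Called once when the stream has finished.
    end() {
      finish(state.ast);                     // Finisher: resolve incomplete elements
    },
  };
}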

The Rendering Pipeline

The following diagram illustrates how Streamdown processes a streaming response from an LLM:

Each token from the LLM stream passes through this pipeline in real time. The incremental lexer is the critical component: it must maintain state between token arrivals so it can recognize that a partial sequence like [Click h might be the start of a link ([Click here](url)) and render it appropriately.
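
A minimal sketch of that kind of stateful decision (illustrative only, not Streamdown's lexer) is to hold back a trailing fragment that is still ambiguous and render only the prefix whose meaning can no longer change:

// Illustrative only: split the input into a prefix that is safe to render
// now and a pending tail that may still turn into a link like [text](url).
function splitAmbiguousTail(text) {
  // A trailing "[..." or "[...](..." without a closing ")" is still ambiguous.
  const match = text.match(/\[[^\]]*(\]\([^)]*)?$/);
  if (match) {
    return { safe: text.slice(0, match.index), pending: text.slice(match.index) };
  }
  return { safe: text, pending: '' };
}

splitAmbiguousTail('See the docs: [Click h');
// => { safe: 'See the docs: ', pending: '[Click h' }

When later tokens resolve the ambiguity – either completing the link or ruling it out – the pending fragment is rendered in its final form.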

Rendering Quality Comparison

The table below compares Streamdown with alternative approaches to rendering LLM output:

| Approach | Streaming? | Partial Syntax | Code Highlighting | Table Rendering | Bundle Size |
| --- | --- | --- | --- | --- | --- |
| Streamdown | Yes, token-at-a-time | Graceful degradation | Background worker | Incremental | 12 KB (gzip) |
| react-markdown | No (waits for complete input) | N/A | Plugin-based | Complete only | 15 KB |
| marked | No | N/A | Plugin-based | Complete only | 10 KB |
| Vanilla innerHTML | Yes, but unsafe | Broken rendering | Manual | Broken | 0 KB (no deps) |
| Custom stream renderer | Partial | Usually broken | Manual | Usually broken | Varies |

Usage Example

Using Streamdown with React and the Vercel AI SDK is straightforward:

import { Streamdown } from '@vercel/streamdown/react';

// "content" is the Markdown string received so far; the component
// re-renders progressively as more of the stream arrives.
export function ChatMessage({ content }) {
  return <Streamdown content={content} />;
}

The component automatically handles the streaming input, progressive rendering, and final resolution. For more advanced use cases, Streamdown provides hooks for custom styling, theme integration, and interaction handlers.
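
As a sketch of how this fits together with the Vercel AI SDK, each streaming assistant message can be rendered directly through the component shown above. The useChat hook here is the AI SDK's React hook; the exact message shape varies between AI SDK versions, so treat this as an outline rather than a drop-in snippet:

import { useChat } from 'ai/react';
import { Streamdown } from '@vercel/streamdown/react';

export function Chat() {
  // useChat streams assistant messages token by token as they arrive.
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((message) => (
        // Each message re-renders progressively as its content grows.
        <Streamdown key={message.id} content={message.content} />
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}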

Getting Started

Visit the Streamdown GitHub repository for installation instructions, API documentation, and examples. The library is available on npm as @vercel/streamdown and supports all major frontend frameworks. The Vercel AI SDK documentation provides integration guides for combining Streamdown with AI SDK streaming responses.

FAQ

What is Streamdown?

Streamdown is Vercel’s open-source streaming Markdown renderer that displays LLM-generated text token by token as it arrives, with progressive rendering of Markdown elements such as tables, code blocks, lists, and headings.

Why is streaming Markdown rendering hard?

Standard Markdown parsers assume complete input. In a streaming interface the Markdown arrives token by token, so the renderer must handle partial syntax – such as a table row arriving before its header, or an unclosed code block – and update the DOM incrementally without flickering or layout shifts.

How does Streamdown handle code syntax highlighting during streaming?

Streamdown uses a multi-pass rendering strategy. Code blocks first appear as unstyled plain text while the tokens arrive, then a background worker applies syntax highlighting once the block is complete. The text is visible with no added delay, and the finished block is still fully highlighted.
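
One way to implement that kind of deferred highlighting (a sketch of the general pattern, not Streamdown's actual worker code) is to show plain text immediately and hand the finished source to a Web Worker once the block closes:

// Sketch of deferred highlighting; 'highlight-worker.js' is a hypothetical
// worker script that returns highlighted HTML for a given source string.
const highlightWorker = new Worker('highlight-worker.js');

function renderCodeBlock(element, source, isComplete) {
  // Plain text first: nothing blocks the stream while tokens arrive.
  element.textContent = source;

  // Once the closing fence arrives, queue the block for highlighting.
  if (isComplete) {
    highlightWorker.postMessage({ id: element.id, source });
  }
}

// When the worker replies, swap the highlighted markup into place.
highlightWorker.onmessage = (event) => {
  const { id, html } = event.data;
  document.getElementById(id).innerHTML = html;
};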

Can I use Streamdown without React?

Yes. Streamdown is framework-agnostic and exports a vanilla JavaScript API that works with any frontend stack. There are also dedicated integrations for React, Vue, Svelte, and Solid.js with streaming-friendly hooks and components.

What is the difference between Streamdown and standard React Markdown?

Standard React Markdown libraries like react-markdown require complete Markdown input before rendering. Streamdown is designed for incremental rendering: it updates the live DOM token by token, handling partial syntax gracefully, and resolving into properly rendered Markdown as the stream completes.

