Count tokens for GPT-4o, GPT-4, Claude, and other LLMs instantly.
Count tokens for GPT-4o, GPT-4, GPT-3.5-turbo, and Claude models simultaneously.
Token counts update instantly as you type or paste text. No waiting required.
Uses the official BPE tokenizer (tiktoken) for precise GPT token counts.
All processing happens in your browser. Your text is never sent to any server.
A token is a chunk of text that language models process. Tokens can be words, parts of words, or even single characters. For example, the word 'hamburger' might be split into 'ham', 'bur', 'ger' — three tokens. On average, one token is roughly 4 characters or 0.75 words in English.
Different models use different tokenizers (encoding schemes). GPT-4o uses o200k_base with a vocabulary of roughly 200,000 tokens, while GPT-4 and GPT-3.5 use cl100k_base with roughly 100,000. A larger vocabulary means common words and phrases are more likely to map to a single token, so the same text usually produces a lower token count.
Claude token counts shown here are approximations based on the cl100k_base tokenizer. Claude uses its own proprietary tokenizer, but the estimates are generally close to the actual values and useful for cost estimation and prompt optimization.
Knowing your token count helps you stay within model context limits (e.g., 128K for GPT-4o, 200K for Claude 3.5), estimate API costs, and optimize prompts by removing unnecessary text. Shorter prompts not only cost less but often produce better results.
Our free online token counter helps developers, prompt engineers, and AI enthusiasts accurately count tokens for popular language models. Whether you are building applications with the OpenAI API or Anthropic's Claude API, knowing your token count is essential for managing costs and staying within context window limits.
The tool uses the official BPE (Byte Pair Encoding) tokenizers to provide accurate counts for GPT models. For Claude models, we provide close approximations. Token counts are calculated in real time as you type, with support for text in any language including English, Chinese, Japanese, Korean, and more.
All tokenization happens entirely in your browser using WebAssembly and JavaScript — your text is never sent to any server. This makes it safe for counting tokens in confidential prompts, API keys, or sensitive content. No sign-up required.