Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.
## Features

- Use OpenAI's format to call 200+ LLMs from 10 providers.
- Supports tools, JSON outputs, image inputs, streaming, and more.
- Runs completely on the client side. No proxy server needed.
- Free and open source under MIT.
## Supported Providers

- AI21
- Anthropic
- AWS Bedrock
- Cohere
- Gemini
- Groq
- Mistral
- OpenAI
- Perplexity
- OpenRouter
- Any other model provider with an OpenAI compatible API (see the sketch below)
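For example, pointing the client at a self-hosted OpenAI-compatible endpoint might look like the following. This is a minimal sketch: the `baseURL` constructor option, the `openai-compatible` provider id, and the local endpoint URL are assumptions here, so check the provider documentation for the exact names.

```ts
import { TokenJS } from 'token.js'

// Assumption: the client accepts a baseURL pointing at any
// OpenAI-compatible endpoint (e.g. a locally hosted server).
const tokenjs = new TokenJS({
  baseURL: 'http://localhost:11434/v1',
})

async function main() {
  const completion = await tokenjs.chat.completions.create({
    // Assumption: 'openai-compatible' is the provider id used for
    // generic OpenAI-compatible endpoints.
    provider: 'openai-compatible',
    model: 'llama3',
    messages: [{ role: 'user', content: 'Hello!' }],
  })
  console.log(completion.choices[0])
}
main()
```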
## Setup

### Installation

```bash
npm install token.js
# or
pnpm install token.js
# or
yarn add token.js
# or
bun add token.js
```
### Usage

Import the Token.js client and call the `create` function with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields. Set the provider's API key in an environment variable, as shown in the examples below.
**OpenAI**

`.env`:

```
OPENAI_API_KEY=<openai api key>
```

```ts
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openai',
    model: 'gpt-4o',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
```
**Anthropic**

`.env`:

```
ANTHROPIC_API_KEY=<anthropic api key>
```

```ts
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'anthropic',
    model: 'claude-3-sonnet-20240229',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
```
**Gemini**

`.env`:

```
GEMINI_API_KEY=<gemini api key>
```

```ts
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}
main()
```
### Streaming

Token.js supports streaming responses for all providers that offer it.

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const result = await tokenjs.chat.completions.create({
    // stream: true returns an async iterable of response chunks
    stream: true,
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: `Tell me about yourself.`,
      },
    ],
  })

  // Print each token as it arrives
  for await (const part of result) {
    process.stdout.write(part.choices[0]?.delta?.content || '')
  }
}
main()
```
### Function Calling

Token.js supports the function calling tool for all providers and models that offer it.

```ts
import { TokenJS, ChatCompletionTool } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  // Describe the tool using OpenAI's function-calling schema
  const tools: ChatCompletionTool[] = [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    },
  ]

  const result = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      {
        role: 'user',
        content: `What's the weather like in San Francisco?`,
      },
    ],
    tools,
    tool_choice: 'auto',
  })
  console.log(result.choices[0].message.tool_calls)
}
main()
```
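The response contains the model's requested tool calls; executing them and returning the results is up to you. Continuing inside `main()` above, here is a minimal sketch of the round trip. It assumes Token.js accepts OpenAI's `tool` role messages, and `getCurrentWeather` is a hypothetical local helper, not part of the library.

```ts
// Hypothetical local implementation of the tool.
function getCurrentWeather(location: string): string {
  return JSON.stringify({ location, temperature: '18C', condition: 'Foggy' })
}

// Execute the requested tool call and send the result back as a
// 'tool' message in a follow-up request (OpenAI's format).
const toolCall = result.choices[0].message.tool_calls?.[0]
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments)
  const followUp = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      { role: 'user', content: `What's the weather like in San Francisco?` },
      result.choices[0].message,
      {
        role: 'tool',
        tool_call_id: toolCall.id,
        content: getCurrentWeather(args.location),
      },
    ],
    tools,
  })
  console.log(followUp.choices[0].message.content)
}
```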
## Feature Compatibility

This table provides an overview of the features that Token.js supports from each LLM provider.
| Provider | Chat Completion | Streaming | Function Calling Tool | JSON Output | Image Input |
| --- | --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ✅ | ➖ | ➖ |
| Bedrock | ✅ | ✅ | ✅ | ✅ | ✅ |
| Mistral | ✅ | ✅ | ✅ | ✅ | ➖ |
| Cohere | ✅ | ✅ | ✅ | ➖ | ➖ |
| AI21 | ✅ | ✅ | ➖ | ➖ | ➖ |
| Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ✅ | ➖ | ✅ | ➖ |
| Perplexity | ✅ | ✅ | ➖ | ➖ | ➖ |
| OpenRouter | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI Compatible | ✅ | ✅ | ✅ | ✅ | ✅ |
### Legend

| Symbol | Description |
| --- | --- |
| ✅ | Supported by Token.js |
| ➖ | Not supported by the LLM provider, so Token.js cannot support it |
Note: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our LLM provider documentation.
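Because Token.js mirrors OpenAI's request format, features like JSON output and image input use the same fields OpenAI does. A minimal sketch of both, assuming Token.js passes these OpenAI fields through unchanged and the chosen model supports them per the table above; the image URL is hypothetical:

```ts
import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  // JSON output: request a JSON-formatted response via OpenAI's
  // response_format field (supported where the table shows ✅).
  const jsonCompletion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'user',
        content: 'List three colors as a JSON array under the key "colors".',
      },
    ],
  })
  console.log(jsonCompletion.choices[0].message.content)

  // Image input: pass an image URL in the message content array,
  // again following OpenAI's format.
  const imageCompletion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'What is in this image?' },
          // Hypothetical image URL for illustration.
          { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
        ],
      },
    ],
  })
  console.log(imageCompletion.choices[0].message.content)
}
main()
```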
## License

Token.js is free and open source software licensed under MIT.