Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format. Free and open source. No proxy server required.
Features
Use OpenAI's format to call 200+ LLMs from 10 providers.
Supports tools, JSON outputs, image inputs, streaming, and more.
Runs completely on the client side. No proxy server needed.
Free and open source under MIT.
Supported Providers
AI21
Anthropic
AWS Bedrock
Cohere
Gemini
Groq
Mistral
OpenAI
Perplexity
OpenRouter
Setup
Installation
npm install token.js
pnpm install token.js
yarn add token.js
bun add token.js
Usage
Import the Token.js client and call chat.completions.create with a prompt in OpenAI's format. Specify the model and LLM provider using their respective fields.
OpenAI

.env
OPENAI_API_KEY=<openai api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openai',
    model: 'gpt-4o',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
Anthropic

.env
ANTHROPIC_API_KEY=<anthropic api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'anthropic',
    model: 'claude-3-sonnet-20240229',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
Gemini

.env
GEMINI_API_KEY=<gemini api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
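AWS Bedrock

Bedrock authenticates with AWS credentials rather than a single API key. A sketch of the expected environment variables (check the Bedrock provider documentation for the exact names):

.env
AWS_REGION_NAME=<aws region>
AWS_ACCESS_KEY_ID=<aws access key id>
AWS_SECRET_ACCESS_KEY=<aws secret access key>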
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'bedrock',
    model: 'meta.llama3-70b-instruct-v1:0',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
Cohere

.env
COHERE_API_KEY=<cohere api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'cohere',
    model: 'command-r-plus',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
Mistral

.env
MISTRAL_API_KEY=<mistral api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'mistral',
    model: 'open-mixtral-8x22b',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
OpenRouter

.env
OPENROUTER_API_KEY=<openrouter api key>
import { TokenJS } from 'token.js'

// Create the Token.js client
const tokenjs = new TokenJS()

async function main() {
  // Create a model response
  const completion = await tokenjs.chat.completions.create({
    // Specify the provider and model
    provider: 'openrouter',
    model: 'nvidia/nemotron-4-340b-instruct',
    // Define your message
    messages: [
      {
        role: 'user',
        content: 'Hello!',
      },
    ],
  })
  console.log(completion.choices[0])
}

main()
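Streaming

Streaming uses OpenAI's format as well. A minimal sketch, assuming stream: true returns an async iterable of chunks as it does in OpenAI's SDK:

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  // Request a streamed response by setting stream: true
  const stream = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Tell me a short story.' }],
    stream: true,
  })

  // Each chunk carries a delta with the next piece of the message
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
  }
}

main()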
Access Credentials
We recommend using environment variables to configure the credentials for each LLM provider.
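If you keep credentials in a .env file, a loader such as dotenv can populate process.env before the client is created. A minimal sketch, assuming dotenv is installed separately (npm install dotenv):

// Load variables from .env into process.env
import 'dotenv/config'
import { TokenJS } from 'token.js'

// Token.js picks up provider credentials from environment variables
const tokenjs = new TokenJS()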
Function Calling Tool

Token.js supports the function calling tool for all providers and models that offer it.
import { TokenJS, ChatCompletionTool } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  // Describe the function the model is allowed to call
  const tools: ChatCompletionTool[] = [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    },
  ]

  const result = await tokenjs.chat.completions.create({
    provider: 'gemini',
    model: 'gemini-1.5-pro',
    messages: [
      {
        role: 'user',
        content: `What's the weather like in San Francisco?`,
      },
    ],
    tools,
    tool_choice: 'auto',
  })
  console.log(result.choices[0].message.tool_calls)
}

main()
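JSON Output

JSON output likewise follows OpenAI's format via the response_format field. A minimal sketch, assuming the provider and model support JSON output (see the compatibility table below):

import { TokenJS } from 'token.js'

const tokenjs = new TokenJS()

async function main() {
  const completion = await tokenjs.chat.completions.create({
    provider: 'openai',
    model: 'gpt-4o',
    // Ask the model to return a valid JSON object, as in OpenAI's API
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'user',
        content: 'Reply with a JSON object containing a "greeting" field.',
      },
    ],
  })
  console.log(completion.choices[0].message.content)
}

main()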
Feature Compatibility
This table provides an overview of the features that Token.js supports from each LLM provider.
| Provider   | Chat Completion | Streaming | Function Calling Tool | JSON Output | Image Input |
| ---------- | --------------- | --------- | --------------------- | ----------- | ----------- |
| OpenAI     | ✅              | ✅        | ✅                    | ✅          | ✅          |
| Anthropic  | ✅              | ✅        | ✅                    | ✅          | ✅          |
| Bedrock    | ✅              | ✅        | ✅                    | ✅          | ✅          |
| Mistral    | ✅              | ✅        | ✅                    | ✅          | ➖          |
| Cohere     | ✅              | ✅        | ✅                    | ➖          | ➖          |
| AI21       | ✅              | ✅        | ➖                    | ➖          | ➖          |
| Gemini     | ✅              | ✅        | ✅                    | ✅          | ✅          |
| Groq       | ✅              | ✅        | ✅                    | ✅          | ➖          |
| Perplexity | ✅              | ✅        | ➖                    | ➖          | ➖          |
Legend
| Symbol | Description                                                       |
| ------ | ----------------------------------------------------------------- |
| ✅     | Supported by Token.js                                             |
| ➖     | Not supported by the LLM provider, so Token.js cannot support it  |
Note: Certain LLMs, particularly older or weaker models, do not support some features in this table. For details about these restrictions, see our LLM provider documentation.
License
Token.js is free and open source software licensed under MIT.