modelinfo — a fast CLI to explore AI model capabilities, pricing, limits, and provider metadata.


modelinfo is a Bun-first TypeScript CLI for exploring AI model metadata from PublicProviderConf. It keeps a local normalized index for fast lookups, ships with a bundled seed dataset so a fresh install can work offline, and refreshes from upstream when the version file changes.

Features

  • Fast default lookup with local normalized index
  • Fuzzy search across models and providers
  • Provider listing and per-provider model listing
  • Side-by-side model diff
  • Focused capability, cost, and limit views
  • Local cache with bundled seed files
  • Auto refresh using the upstream version file

Requirements

  • Bun 1.0+ for local development
  • Node.js 20+ for running the published CLI package

Install

Install globally:

npm install -g modelinfo

Or run without installing:

bunx modelinfo gpt-4o
# or
npx modelinfo gpt-4o

For local development:

bun install

Install as Agent Skill

Use with AI coding agents (Claude Code, Cursor, Codex, etc.):

npx skills add zerob13/modelinfo-cli

See skills.sh for more information.

Development

bun run build:seed
bun run build
bun run dev -- --help
bun run test
bun run lint

On first run, the local cache is created by copying the packaged seed/ directory into ~/.modelinfo/. After that, modelinfo reads ~/.modelinfo/index.json for normal lookups and periodically checks the remote version file for updates.
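The first-run behavior can be illustrated with a minimal sketch. This is a simplified stand-in, not the CLI's actual implementation: temp directories replace ~/.modelinfo/ and the packaged seed/, and the seed content is a dummy JSON file.

```shell
# Stand-in paths; the real CLI uses the packaged seed/ and ~/.modelinfo/.
SEED_DIR="$(mktemp -d)"               # stand-in for the packaged seed/
CACHE_DIR="$(mktemp -d)/modelinfo"    # stand-in for ~/.modelinfo/
echo '{"models":[]}' > "$SEED_DIR/index.json"

# Copy the seed only when no cache exists yet (first run).
if [ ! -f "$CACHE_DIR/index.json" ]; then
  mkdir -p "$CACHE_DIR"
  cp "$SEED_DIR"/*.json "$CACHE_DIR/"
fi
cat "$CACHE_DIR/index.json"
```

Subsequent runs hit the existing cache and skip the copy, which is why lookups stay fast after the first invocation.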

Usage

modelinfo gpt-4o
modelinfo search qwen
modelinfo providers
modelinfo list 302ai
modelinfo diff gpt-4o gpt-4
modelinfo caps gpt-4o
modelinfo cost gpt-4o
modelinfo limit gpt-4o
modelinfo update
modelinfo doctor
modelinfo gpt-4o --output json
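The --output json flag makes these commands easy to script. A hedged example: the payload below is simulated with echo, and the "id" field name is an assumption about the JSON shape; in practice, pipe `modelinfo gpt-4o --output json` instead.

```shell
# Simulated payload; replace the echo with: modelinfo gpt-4o --output json
# The "id" field name is an assumed part of the JSON shape.
echo '{"id":"gpt-4o","provider":"openai"}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["id"])'
```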

If you prefer the web UI for the same dataset, use models.anya2a.com.

Provider filters

Most model commands accept --provider to disambiguate model ids that appear under more than one provider:

modelinfo gpt-4o --provider openai
modelinfo search gpt --provider openrouter
modelinfo cost gpt-4o --provider openai
modelinfo diff gpt-4o gpt-4.1 --provider-a openai --provider-b openai
modelinfo search qwen --provider openrouter --output json

Cache layout

~/.modelinfo/
  all.json
  version.json
  index.json

Bundled seed files are published inside the npm package:

seed/
  all.json
  version.json
  index.json

Commands

  • modelinfo <model>: resolve and show a model
  • modelinfo search <keyword>: fuzzy search models
  • modelinfo providers: list providers
  • modelinfo list <provider>: list models under a provider
  • modelinfo update: refresh local cache if upstream changed
  • modelinfo diff <modelA> <modelB>: compare two models
  • modelinfo caps <model>: show capabilities only
  • modelinfo cost <model>: show pricing only
  • modelinfo limit <model>: show token limits only
  • modelinfo doctor: show cache and version status

Publish

bun release

bun release is an interactive release flow: it prompts for the next version and the dist-tag (latest or beta), runs checks, creates a release commit and git tag, then publishes to npm.

For a manual publish without the helper:

bun run build:seed
bun run build
npm publish

The prepack lifecycle script already runs the seed build and the TypeScript build, so a plain npm publish includes dist/ and seed/.
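For reference, the prepack wiring in package.json would look along these lines (a sketch; the exact files list is an assumption, and the script names follow the commands shown above):

```json
{
  "scripts": {
    "prepack": "bun run build:seed && bun run build"
  },
  "files": ["dist", "seed"]
}
```

npm runs prepack automatically before packing or publishing, so the published tarball always carries a freshly built dist/ and seed/.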

Release automation

This repository includes GitHub Actions for CI and automated releases:

  • CI: runs format check, lint, test, and build on pushes and pull requests
  • Release Please: watches commits on master, opens or updates a release PR, and automatically bumps the version once the release PR is merged
  • npm publish runs automatically after a release is created

Required GitHub secret:

  • NPM_TOKEN: npm automation token with publish permission for modelinfo

Recommended commit style:

  • feat: add provider aliases
  • fix: handle missing cache version
  • chore: update dependencies
