
Overview

This changelog tracks all notable changes to MCPJam Inspector. We follow Semantic Versioning and keep our changelog in the spirit of Keep a Changelog.

Latest Releases

Recent Updates

Added

  • GPT-5 Model Support
    • Added support for GPT-5 model variants: gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-chat-latest, gpt-5-pro, gpt-5-codex
    • Organization verification notice for GPT-5 access
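For illustration only, here is roughly how one of the new model ids could be exercised through the official openai TypeScript SDK. This is a hedged sketch, not MCPJam Inspector's internal code; the prompt and model choice are arbitrary, and GPT-5 access may require a verified OpenAI organization (hence the verification notice above).

```typescript
// Hedged sketch: calling one of the newly supported GPT-5 model ids with the
// official openai SDK. Illustrative only; not MCPJam Inspector source code.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main(): Promise<void> {
  const completion = await client.chat.completions.create({
    // Also supported: gpt-5, gpt-5-nano, gpt-5-chat-latest, gpt-5-pro, gpt-5-codex
    model: "gpt-5-mini",
    messages: [{ role: "user", content: "List the tools exposed by my MCP server." }],
  });
  console.log(completion.choices[0]?.message.content);
}

main().catch(console.error);
```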

Improved

  • Error Handling
    • Enhanced streaming error handling in chat interface
    • Errors now display as inline alerts instead of failing silently
    • Better error messages from AI providers
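The general pattern behind the inline alerts can be sketched as follows. This is a generic TypeScript illustration with hypothetical names (fakeProviderStream, onChunk, onAlert), not the Inspector's actual chat implementation.

```typescript
// Hedged sketch: surface a mid-stream provider failure as an inline alert
// instead of letting the chat end silently. All names are illustrative.
type Alert = { kind: "error"; message: string };

async function* fakeProviderStream(): AsyncGenerator<string> {
  yield "Hello";
  throw new Error("provider rate limit exceeded"); // simulated mid-stream failure
}

async function runChatTurn(
  onChunk: (text: string) => void,
  onAlert: (alert: Alert) => void,
): Promise<void> {
  try {
    for await (const chunk of fakeProviderStream()) {
      onChunk(chunk);
    }
  } catch (err) {
    // Render the provider's error message inline rather than failing silently.
    onAlert({ kind: "error", message: err instanceof Error ? err.message : String(err) });
  }
}

// Usage: log chunks and alerts to the console in place of UI rendering.
void runChatTurn(console.log, (a) => console.warn(`[inline alert] ${a.message}`));
```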

Version 1.0.0

Added

  • MCP Server Connection Management
    • Support for STDIO, SSE, and Streamable HTTP transports (see the connection sketch after this list)
    • Multi-server connection support
    • Real-time connection status monitoring
  • LLM Playground
    • OpenAI integration (GPT-3.5/4)
    • Anthropic Claude integration (Claude 2/3)
    • DeepSeek AI support (DeepSeek R1)
    • Ollama local model compatibility
    • Interactive chat interface with streaming responses
  • Tools & Resources Testing
    • Tool execution and validation
    • Resource schema verification
    • Prompt testing interface
    • Real-time parameter validation
  • OAuth 2.0 Testing
    • Guided OAuth flow setup
    • Token management
    • Scope verification
    • Refresh token handling
  • MCP Evals
    • Automated compliance testing
    • Custom evaluation framework
    • Test result reporting
    • Performance benchmarking
  • Developer Tools
    • Comprehensive logging system
    • Request/response tracing
    • Error reporting and analysis
    • Performance monitoring
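
For the transports listed above, a minimal client connection over STDIO or Streamable HTTP looks roughly like this with the official @modelcontextprotocol/sdk. The Inspector manages these connections through its UI, so this sketch is only illustrative; the example server package and URL are placeholders.

```typescript
// Hedged sketch of connecting to an MCP server over two common transports,
// using the official @modelcontextprotocol/sdk TypeScript client.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main(): Promise<void> {
  const client = new Client({ name: "example-client", version: "1.0.0" });

  // STDIO: spawn a local server process and talk to it over stdin/stdout.
  const stdio = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-everything"],
  });
  await client.connect(stdio);

  // Alternatively, Streamable HTTP: connect to a remote server by URL.
  // const http = new StreamableHTTPClientTransport(new URL("http://localhost:3001/mcp"));
  // await client.connect(http);

  console.log(await client.listTools());
  await client.close();
}

main().catch(console.error);
```

SSE follows the same pattern via SSEClientTransport from @modelcontextprotocol/sdk/client/sse.js.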

Improved

  • Enhanced UI/UX with modern design
  • Better error messages and debugging support
  • Optimized performance for large MCP responses
  • Improved documentation and examples

Fixed

  • Various bug fixes and stability improvements
  • Memory leak fixes in long-running sessions
  • Edge cases in MCP protocol handling

Coming Soon

We’re actively working on new features and improvements:
  • Enhanced evaluation templates
  • More LLM provider integrations
  • Advanced debugging tools
  • Performance analytics dashboard
  • CI/CD integration support

Reporting Issues

Found a bug or have a feature request? Please open an issue on GitHub.