# Changelog
## chatLLM 0.1.4 (Upcoming Release - February 2026)
### New Features
- **AWS Bedrock Integration**: Added first-class support for `provider = "bedrock"` using AWS SigV4 request signing and the Bedrock Converse API. New AWS parameters in `call_llm()` include `aws_region`, `aws_access_key_id`, `aws_secret_access_key`, and `aws_session_token`.
- **Azure OpenAI Integration**: Added `provider = "azure_openai"` with deployment-based endpoint routing and `api-key` authentication. New Azure OpenAI parameters in `call_llm()` include `azure_endpoint` and `azure_api_version`.
- **Azure AI Foundry Integration**: Added `provider = "azure_foundry"` with support for either `api-key` or bearer-token authentication. New Foundry parameters in `call_llm()` include `azure_foundry_endpoint`, `azure_foundry_api_version`, and `azure_foundry_token`.
- **Model Catalog Expansion**: `list_models()` now supports both Azure providers and Bedrock, and `list_models("all")` includes the expanded provider set.
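As a hedged sketch of how the new providers might be called (the `provider` values and the AWS/Azure parameter names come from the entries above; the `prompt` argument, the endpoint URL, and the API version shown here are illustrative assumptions, not documented defaults):

```r
library(chatLLM)

# AWS Bedrock via SigV4-signed Converse API requests;
# credentials are read from the usual AWS environment variables
resp <- call_llm(
  provider              = "bedrock",
  prompt                = "Summarise this changelog.",  # assumed argument name
  aws_region            = "us-east-1",
  aws_access_key_id     = Sys.getenv("AWS_ACCESS_KEY_ID"),
  aws_secret_access_key = Sys.getenv("AWS_SECRET_ACCESS_KEY"),
  aws_session_token     = Sys.getenv("AWS_SESSION_TOKEN")
)

# Azure OpenAI via deployment-based endpoint routing and api-key auth
resp <- call_llm(
  provider          = "azure_openai",
  prompt            = "Summarise this changelog.",       # assumed argument name
  azure_endpoint    = "https://my-resource.openai.azure.com",  # hypothetical endpoint
  azure_api_version = "2024-06-01"                             # hypothetical version
)

# Inspect the expanded catalog across every supported provider
list_models("all")
```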
### Improvements
- **Provider-specific guidance**: Error messages now give more precise credential hints for AWS Bedrock, Azure OpenAI, and Azure AI Foundry.
- **Tests and documentation**: Added test coverage for the Bedrock, Azure OpenAI, and Azure AI Foundry provider paths, and updated the generated Rd documentation accordingly.
## chatLLM 0.1.3
CRAN release: 2025-08-19
### New Features
- **Google Gemini Integration**: `chat_llm()` now speaks to Gemini via Google's OpenAI-compatible Chat Completions API (`…/v1beta/openai/chat/completions`). The default model is `gemini-2.0-flash`, and new helpers (`get_gemini_models()`, a `"gemini"` option in `list_models()`) make catalog discovery a one-liner. (Google AI for Developers, Google Developers Blog)
- **xAI Grok Integration**: Added first-class support for Grok through the endpoint `https://api.x.ai/v1/chat/completions`. The default model is `grok-3-latest`. You also get `get_grok_models()` plus a `"grok"` flag in `list_models()` for painless switching. (xAI Docs, docs.typingmind.com)
- **Model Catalog Expansion**: `list_models("all")` now aggregates catalogs from eight providers (OpenAI, Groq, Anthropic, DeepSeek, DashScope, GitHub, Gemini, and Grok), so you can inspect every available model in a single call.
## chatLLM 0.1.2
CRAN release: 2025-05-20
### New Features
- **DeepSeek Integration**: `chat_llm()` now supports DeepSeek as a backend provider. This expands the range of available language models and increases flexibility for users selecting different inference engines.
- **Alibaba DashScope Integration**: You can now use models from Alibaba Cloud's Model Studio (DashScope) via its OpenAI-compatible endpoints. This allows users in mainland China and beyond to easily integrate powerful Qwen-series models (like `qwen-plus`, `qwen-turbo`, and others) using the same `chat_llm()` interface.
- **GitHub Copilot-Compatible Model Integration**: You can now use models hosted through GitHub Copilot-compatible endpoints. This allows seamless integration with custom-hosted or proxy-accessible models, making it easier to experiment with private or specialized deployments.
- **Model Catalog Access**: `chat_llm()` now supports listing all available models across all supported providers, making it easier to discover and compare model options before selecting one for your workflow.
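To illustrate the provider-switching idea behind this release, here is a hedged sketch (the `qwen-plus` model ID comes from the entry above; the `provider` value and the `prompt` and `model` argument names are assumptions, not documented signatures):

```r
library(chatLLM)

# Same chat_llm() interface, different inference engine:
# a Qwen-series model served through DashScope's
# OpenAI-compatible endpoints
resp <- chat_llm(
  provider = "dashscope",   # assumed provider value
  model    = "qwen-plus",   # Qwen-series model named above
  prompt   = "Introduce yourself in one sentence."  # assumed argument name
)
```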