What's New in GPTScript v0.3

Apr 4, 2024 by Craig Jellick

We're excited to announce the release of GPTScript v0.3. This release brings support for using GPTScript with the ever-growing ecosystem of proprietary and open source models available.

As a bit of background, we initially released GPTScript with support for just OpenAI models because they were hands-down the best, particularly at function calling, which is at the heart of GPTScript. We're releasing this feature now because the LLM ecosystem is rapidly evolving and models are improving every day. Moreover, OpenAI's API has emerged as the de facto standard for model APIs, particularly among open source models. This makes swapping them in especially easy.

With that said, there are two ways to leverage your model of choice in GPTScript: via the model's native OpenAI-compatible API or through a provider-specific shim. Documentation for this can be found here, but we'll dive into the details of each approach below.

OpenAI-compatible APIs

Integrating with these models has two steps:

  1. Configure your API key
  2. Reference the model in your GPTScript file

For OpenAI-compatible providers, GPTScript looks for an API key in an environment variable built from the prefix GPTSCRIPT_PROVIDER_, the API endpoint's domain converted to environment variable format, and the suffix _API_KEY. For example, if you are using mistral-large-latest from https://api.mistral.ai/v1, the environment variable would be GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY.
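To make the derivation concrete, here's a minimal shell sketch (the key value is a placeholder):

# Take the endpoint's domain:    api.mistral.ai
# Convert it to env var format:  API_MISTRAL_AI
# Add the prefix and suffix:
export GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY="<your Mistral API key>"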

Once you've set that, you can see what models are available via the --list-models flag, like so:

$ gptscript --list-models https://api.mistral.ai/v1
mistral-embed from https://api.mistral.ai/v1
mistral-large-2402 from https://api.mistral.ai/v1
mistral-large-latest from https://api.mistral.ai/v1
mistral-medium from https://api.mistral.ai/v1
mistral-medium-2312 from https://api.mistral.ai/v1
mistral-medium-latest from https://api.mistral.ai/v1
mistral-small from https://api.mistral.ai/v1
mistral-small-2312 from https://api.mistral.ai/v1
mistral-small-2402 from https://api.mistral.ai/v1
mistral-small-latest from https://api.mistral.ai/v1
mistral-tiny from https://api.mistral.ai/v1
mistral-tiny-2312 from https://api.mistral.ai/v1
open-mistral-7b from https://api.mistral.ai/v1
open-mixtral-8x7b from https://api.mistral.ai/v1

With that information in hand, you can set the Model stanza in your script. Here's an example:

Model: mistral-large-latest from https://api.mistral.ai/v1
Description: Returns back the input of the script
Args: input: Any string

echo "${input}"

This paradigm works for local OSS models as well. For example, if you're running a model via llama.cpp, you can see which models are available via:

gptscript --list-models http://127.0.0.1:8019/v1

and set them in your scripts in the same way.
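Referencing a local model looks just like the earlier example; here's a minimal sketch, assuming your llama.cpp server reports a model named llama-3-8b-instruct (the model name is a placeholder that depends on what you're serving):

Model: llama-3-8b-instruct from http://127.0.0.1:8019/v1
Description: Returns back the input of the script
Args: input: Any string

echo "${input}"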

Provider-specific shims

Many proprietary models from large vendors don't support the OpenAI API. For such models, we've introduced provider-specific shims that translate GPTScript's OpenAI API requests into requests those models understand. We currently have support for Mistral on Azure, Claude on Anthropic, and Gemini on Google.

To use one of these providers, you use the same "Model" stanza, but instead of pointing at an API endpoint, you point to the shim, just as you would reference a tool. Here's an example of using Claude 3 from Anthropic:

model: claude-3-haiku-20240307 from github.com/gptscript-ai/anthropic-provider

Say hello world

Note that each of these providers has specific configuration requirements, so visit the links above for more details.

Things to keep in mind

As you're experimenting with our multi-model support, there are a handful of things you should keep in mind:

First, not all models are created equal. Even if two models are of similar quality, they will likely respond to the same prompt very differently. As such, prompts (and therefore GPTScripts and tools) are not portable across models and providers.

Second, if your script has multiple tool definitions, each tool must specify its model. If a tool doesn't specify a model, the default (OpenAI) model will be used. If your script imports a tool, say one discovered on https://tools.gptscript.ai, it will use the model configured in that tool's gpt file. It won't "inherit" the model you've specified. This is because of the previous point: prompts aren't portable.
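As a minimal sketch of that per-tool behavior, here are two tools in one file, each pinned to its own model; the tool name, descriptions, and prompts are hypothetical, and the models come from the examples above:

Model: mistral-large-latest from https://api.mistral.ai/v1
Tools: shout
Description: Greets the user loudly

Say hello world, then send that greeting to shout.

---
Name: shout
Model: claude-3-haiku-20240307 from github.com/gptscript-ai/anthropic-provider
Description: Repeats the input in all caps
Args: input: Any string

Repeat "${input}" in all caps.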

Third, just in case you're wondering, the only OpenAI APIs we use are the chat completions and list models endpoints. GPTScript doesn't use higher-level APIs like assistants and threads.
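For context, here's a rough sketch of those two calls against the Mistral endpoint used above, using the environment variable from earlier (the request shape is the standard OpenAI one):

curl https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY"

curl https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-large-latest", "messages": [{"role": "user", "content": "Say hello world"}]}'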

Finally, you can mix and match! The best proprietary models are also likely to be the most expensive, so you can save time and money if you can get a specific tool working well on a cheaper or faster model. The important thing is to experiment and see what's best for your use case.

Wrapping Up

That does it for this overview of the v0.3 release and GPTScript's multi-model support. You can see the full release notes here and join us on Discord if you have any questions or feedback.

You can also watch Acorn co-founders Darren Shepherd and Shannon Williams demonstrate the new multi-model support in last week's community update. Our next community live stream is this Friday, April 5th, at 9:00 am US Pacific Time.