Using LlamaIndex and gpt-3.5-turbo (ChatGPT API) with Azure OpenAI Service
In this post we briefly discuss how LlamaIndex 🦙 (GPT Index) and
gpt-35-turbo (the model behind ChatGPT) can be used with Azure OpenAI Service.
If you want a short intro to using Azure OpenAI Service with LlamaIndex, have a look at this post: posts/using-gpt-index-llamaindex-with-azure-openai-service/
First, create a
.env file and add your Azure OpenAI Service details:
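A minimal .env might look like the sketch below. The variable names follow the conventions of the openai Python SDK's Azure mode; substitute your own resource name, key, and API version:

```
OPENAI_API_TYPE=azure
OPENAI_API_BASE=https://<your-resource-name>.openai.azure.com/
OPENAI_API_KEY=<your-api-key>
OPENAI_API_VERSION=2023-03-15-preview
```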
Next, make sure that you have
text-embedding-ada-002 deployed, and that the deployment uses the same name as the model itself.
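The naming requirement can be captured in a small sanity check. This is purely an illustration (the variable and function names are mine, not from any library):

```python
# The Azure deployment for the embedding model must carry the same name as
# the model itself, i.e. a deployment literally called "text-embedding-ada-002".
EMBEDDING_MODEL = "text-embedding-ada-002"
EMBEDDING_DEPLOYMENT = "text-embedding-ada-002"  # the deployment name chosen in Azure

def deployment_name_matches_model(model: str, deployment: str) -> bool:
    """Return True when the deployment name matches the model name."""
    return model == deployment

print(deployment_name_matches_model(EMBEDDING_MODEL, EMBEDDING_DEPLOYMENT))  # True
```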
Let’s install/upgrade to the latest versions of the relevant libraries:

```shell
pip install openai --upgrade
pip install langchain --upgrade
pip install llama-index --upgrade
```
As of writing this, there seems to be a bug where
openai_api_version needs to be set on both the LLM and the embedding model in order for this to work. If you run into errors about the model not being found, setting
openai.log = "debug" is helpful for troubleshooting where calls are failing. In my case, it was because the api-version parameter of the API request was not being set to the latest version.
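One way to sketch the workaround is to keep the API version in a single place and pass it to both model wrappers. The helper below is my own illustration; the openai_api_version keyword mirrors the parameter mentioned above, and the commented-out lines only hint at where the dictionaries would be used (the exact LangChain class and parameter names may differ in your version):

```python
import os

# Assumption: this was the latest api-version at the time of writing.
API_VERSION = os.environ.get("OPENAI_API_VERSION", "2023-03-15-preview")

def azure_model_kwargs(deployment_name: str, api_version: str = API_VERSION) -> dict:
    # Work around the bug: openai_api_version must be set on BOTH the LLM
    # and the embedding model, or the api-version parameter is not sent.
    return {
        "deployment_name": deployment_name,
        "openai_api_version": api_version,
    }

llm_kwargs = azure_model_kwargs("gpt-35-turbo")
embedding_kwargs = azure_model_kwargs("text-embedding-ada-002")

# Hypothetical usage (class/parameter names are assumptions):
# llm = AzureChatOpenAI(**llm_kwargs)
# embeddings = OpenAIEmbeddings(**embedding_kwargs)
# openai.log = "debug"  # verbose request logging for troubleshooting
```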
In this blog post, we discussed how we can use the ChatGPT API (
gpt-35-turbo model) with Azure OpenAI Service and LlamaIndex.