Mastering OpenAI API: Complete Guide on Accessing and Using its Potential

Assume you’re a busy developer with a penchant for procrastination. You have a looming deadline for a project, and you haven’t started writing the documentation yet. Panic sets in as you realise you’re running out of time. What do you do?

Here’s a solution for you:

The OpenAI API can save the day! You can write a simple Python script that generates documentation for your project automatically. But first, let’s get to know a little about the OpenAI API.

The OpenAI API is a powerful tool that provides developers with access to OpenAI’s state-of-the-art natural language processing (NLP) models. These models are designed to understand and generate human-like text, making them valuable for a wide range of applications.

What is OpenAI?

OpenAI is a prominent AI research organization founded in 2015. Its primary goal is to develop advanced artificial intelligence technologies and ensure their equitable use for the benefit of society. OpenAI has created powerful language models like GPT-3 and GPT-4, which excel in natural language understanding and generation. It offers APIs to integrate these models into various applications, making AI accessible to developers and businesses.

What is an API and how does it work?

To make it easier to comprehend, let’s try to explain this using a simple analogy. Assume an API is like a menu in a restaurant. It lists the dishes you can order, along with a description of each dish. When you specify what you’d like to order, the kitchen (i.e., the system) prepares the dish and serves it to you. In this analogy, the menu is the API that tells you what options you have and how to order, and the kitchen is the system that prepares your request and sends it back to you. This way, different parts of a system can talk to each other even if they’re made by different people or companies, just like restaurants can serve dishes from different chefs.

How does OpenAI API work?

The OpenAI API is a platform that allows developers to access and leverage OpenAI’s advanced language models, such as GPT-3, for various natural language processing tasks. Users send a prompt or input text to the API, and the model processes it to generate human-like text outputs based on the given context. The API makes it simple to integrate powerful language understanding and generation capabilities into applications, chatbots, websites, and more. OpenAI’s models are continually improved through pre-training on vast text datasets and fine-tuning for specific tasks, providing developers with a versatile tool for a wide range of language-related applications while maintaining control and safety measures. Check out our other blogs to get a more detailed understanding of the workings of language models.

Let’s get Started

To get started, you’ll need to create an account on the OpenAI platform and obtain API keys. In this tutorial, we’ll walk you through the process of creating an account, generating API keys, and using them with Python to access OpenAI’s language models.

Prerequisites

Before you begin, make sure you have the following:

  1. Python installed on your computer (version 3.6 or higher recommended).
  2. An internet connection.
  3. An OpenAI account (if you don’t have one, the steps below walk you through creating it).

Step 1: Sign Up for an OpenAI Account

If you haven’t already, go to the OpenAI website and sign up for an account. You will need to provide your email address and create a password. Follow the steps below to easily create an OpenAI account.

Visit the OpenAI Platform website:

https://platform.openai.com/overview

On the landing page (shown in Fig 1), you will see the option to log in or sign up; which one you choose depends on whether you already have an account.

You can either register an account by providing your email address or access the platform by logging in with your OpenAI account details.

Step 2: Generate API Keys

Now that we’ve successfully created an OpenAI account, let’s proceed to generate an API key. To simplify the process, this blog includes visual aids to guide you through each step. Please follow the instructions below.

As illustrated in Fig 2, after logging in, navigate to the top-right corner of the page and click on your profile icon. From the dropdown menu that appears, select “View API Keys.”

Upon reaching the page displayed in Fig 3, locate and click the “Create new secret key” option to generate a fresh API Key.

An API Key will be presented on the screen; it’s crucial to copy and save this key right away, because you won’t be able to view it again later.

Step 3: Billing Details

OpenAI’s API typically mandates the inclusion of billing details, regardless of whether you’re using their free tier or a subscription plan. Billing information serves multiple crucial purposes, including user authentication, usage tracking to enforce rate limits, and accurate billing for exceeding free-tier usage or accessing paid subscription features. To initiate API requests, you must establish an account with billing information and obtain an API key. It’s worth noting that the absence of billing information will grant you login access, but you won’t be able to make additional API requests.

Navigate to the “Billing” section within your organization settings and select “Start payment plan” as shown in Fig 4. Here, you’ll provide your credit card information. It’s worth highlighting that even after adding your card details, you can continue using the free tier by setting request limits for your API usage.

Note:

To explore and experiment with the API, new users receive $5 worth of free tokens, which have a three-month expiration period. Once your token quota is exhausted, you have the option to provide billing details for an upgrade to a pay-as-you-go plan, allowing continued API usage. Failure to input billing information will grant login access but restrict any additional API requests.

Step 4: Install Required Python Libraries

To work with the OpenAI API in Python, you’ll need to install the `openai` Python library. You can do this using `pip`:

pip install openai
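Note: the examples in this post use the `openai.Completion` and `openai.ChatCompletion` helpers from the pre-1.0 releases of the `openai` library. If you install a newer release and the snippets below fail, one option (an assumption about your environment, not a requirement) is to pin the older version:

pip install "openai<1.0"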

With all the necessary preparations in place, let’s dive straight into how you can harness the power of OpenAI in Python to simplify your tasks.

Step 5: Using Your OpenAI API Key with Python

Now that you have your API key and the necessary Python library installed, you can start using OpenAI’s language models in your Python code.

Step 5.1. Initializing the OpenAI Client

To use OpenAI’s Python library, you first need to initialize the OpenAI client with your API key. This key acts as your authentication to access the API. Here’s how you do it:

First, import the library you need:

import openai

Replace ‘YOUR_API_KEY’ below with the API key you generated earlier:

api_key = 'YOUR_API_KEY'

Initialize the OpenAI client with your API key

openai.api_key = api_key

This configures the library with your credentials, so that every request your Python script sends to OpenAI’s servers is authenticated and you can receive responses.
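To confirm that the key is wired up correctly, you can make a lightweight call before doing any real work. The sketch below (assuming the pre-1.0 `openai` library used throughout this post) simply lists a few of the models your account can access:

# Quick sanity check: list a few models available to your account
models = openai.Model.list()
print([m.id for m in models.data][:5])  # show the first few model IDs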

Step 5.2. Figuring Out Which Type of Model Is Suitable

The `openai` Python library provides helper functions that make it easier for developers to send requests, process responses, and manage API usage. The two you will use most often are:

  1. `openai.ChatCompletion.create()`
  2. `openai.Completion.create()`

Look at the following table to understand the clear distinction between the two:

|  | ChatCompletion Endpoint | Completion Endpoint |
| --- | --- | --- |
| Prompt | Takes a series of messages (a conversation) as input | Takes only a single prompt as input |
| Models | Newer models, e.g., gpt-3.5-turbo, gpt-4 | Legacy models, e.g., text-davinci-003, text-davinci-002, davinci, curie, babbage, ada |
| Usage | ChatGPT-style chatbots, virtual assistants for customer service, interactive surveys and forms | Writing an email, revising a message, general text generation |
| Pricing | Cheaper than the Completion endpoint as of Sept 2023 | More expensive than the ChatCompletion endpoint as of Sept 2023 |

Now that we have understood the difference between these two, let’s dive into how to use them in different scenarios.

Step 5.3. Completions API Endpoint Usage in Python

We have already seen that working with the OpenAI API starts with a well-formed prompt. The Completion API takes a single prompt as input and returns a response object with various properties, including the generated text.

Here’s a simple example that translates a piece of text.

INPUT:

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Translate the following English text to Hindi: 'I want to order a basketball'",
)

print(response.choices[0].text.strip())

Explanation:

  1. `openai.Completion.create()`: This is a method provided by the `openai` Python library that sends a request to the OpenAI API for text generation. It allows you to specify various parameters for the request, such as the model (engine), the prompt, and other options.
  2. `engine="text-davinci-003"`: This parameter specifies the language model you want to use. In this case, you are using the `text-davinci-003` engine, which is one of OpenAI’s models designed for text generation.
  3. `prompt="Translate the following English text to Hindi: 'I want to order a basketball'"`: This parameter provides the input prompt to the model. It instructs the model to translate the provided English sentence into Hindi.

OUTPUT:

मुझे एक बास्केटबॉल ऑर्डर करना है

Please be aware that the syntax shown here is not the only way to interact with the Completion API. You can adjust the number of tokens generated, the sampling temperature, and other options as necessary.
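For instance, the sketch below (a hedged example; the prompt text and parameter values are illustrative, not part of the original snippet) shows the commonly adjusted `max_tokens`, `temperature`, and `n` parameters:

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Write a one-line tagline for a small coffee shop.",
    max_tokens=30,     # cap the length of the generated text
    temperature=0.7,   # higher values produce more varied output
    n=2,               # request two alternative completions
)

for choice in response.choices:
    print(choice.text.strip())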

Step 5.4. ChatCompletions API Endpoint Usage in Python

For the ChatCompletion API, you typically provide a list of messages in a conversation, where each message has a “role” (“system,” “user,” or “assistant”) and “content” (the text of the message). The response from chat models is also a dictionary with message properties, including the assistant’s reply.

INPUT:

word = 'Silk'

prompt = f'''
Meaning: Give the meaning
Antonym: Give the antonym

```{word}```
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message["content"])

Explanation:

  1. `openai.ChatCompletion.create()`: This method sends a request to the OpenAI ChatGPT API to generate responses in a chat-like conversation format. It allows you to specify parameters like the model (in this case, “gpt-3.5-turbo”) and the messages exchanged in the conversation.
  2. `model="gpt-3.5-turbo"`: This parameter specifies the model you want to use for the conversation. “gpt-3.5-turbo” is one of OpenAI’s models optimized for chat-like interactions, offering a balance between capability and cost.
  3. `messages`: This parameter is a list of message objects that represent the conversation between the user and the model. Each message has two properties: “role” and “content.” “Role” can be “system,” “user,” or “assistant,” and “content” contains the text of the message.

OUTPUT:

Meaning: a fine, soft, and lustrous fiber produced by certain insects, especially the silkworm, for weaving into cocoons and textiles

Antonym: Rough, coarse, or synthetic fabrics such as polyester or nylon.

In this example, we’ve demonstrated the usage of the ChatCompletion API with a single message. However, you have the option to work with multiple messages and leverage various additional parameters to customize your interactions. Let’s explore these possibilities in more detail.
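As a hedged sketch (the messages and parameter values here are illustrative), a conversation can include a system message that sets the assistant’s behaviour, together with tuning parameters such as `temperature` and `max_tokens`:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant that answers in one sentence."},
        {"role": "user", "content": "What is an API key used for?"},
    ],
    temperature=0.5,   # moderate randomness
    max_tokens=60,     # limit the length of the reply
)

print(response.choices[0].message["content"])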

Use Case

Now that we have grasped the concepts of when and where to employ OpenAI API Keys, let’s put our knowledge into action by solving real-world problems with the assistance of OpenAI models.

We’ll build an interactive chatbot using the OpenAI API.

Start by importing the required libraries and packages. When working with your OpenAI API key, it is considered a best practice to securely store and access it using environment variables.

import os
import openai
 
openai.api_key = os.environ[“OPENAI_API_KEY”]

As previously established, for an interactive chatbot, we’ll employ the ChatCompletion API and utilize the `gpt-3.5-turbo` model. The Python functions shown below use the OpenAI API to generate text based on input messages.

def get_completion(prompt, model="gpt-3.5-turbo", temperature=0):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]


def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # this is the degree of randomness of the model's output
    )
    print(str(response.choices[0].message))
    return response.choices[0].message["content"]

The first function takes a single prompt and returns the generated content, while the second function accepts a list of messages for more extended conversations. Both functions allow specifying the model and the temperature parameter for controlling output randomness.
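For example, the single-prompt helper can be called like this (the prompt text is just an illustration):

print(get_completion("Summarise the plot of Romeo and Juliet in one sentence."))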

Given that we’re developing a chatbot, it’s crucial to provide the model with clear instructions on how it should engage in conversations with users. Hence the following messages are provided.

messages = [
    {'role': 'system', 'content': 'You are a friendly chatbot.'},
    {'role': 'user', 'content': 'Hi, my name is Lily'},
]

After completing the setup, we’ll invoke the function to generate an appropriate response.

response = get_completion_from_messages(messages, temperature=1)
print(response)

OUTPUT

{
  "content": "Hello Lily! It is nice to meet you. How can I assist you today?",
  "role": "assistant"
}
Hello Lily! It is nice to meet you. How can I assist you today?


The chatbot is now fully functional. You can input prompts, and it will engage in conversations with you just like a typical chatbot. For the chatbot to provide accurate answers, it’s essential to include additional messages that instruct it on its behaviour and functionality.
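One simple way to do this is to keep appending each user message and assistant reply to the same `messages` list, so the model always sees the full conversation history. Here is a minimal sketch using the helper defined above (the loop structure and exit commands are illustrative assumptions):

messages = [{"role": "system", "content": "You are a friendly chatbot."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in ("quit", "exit"):
        break
    messages.append({"role": "user", "content": user_input})
    reply = get_completion_from_messages(messages, temperature=1)  # also prints the raw message, as defined above
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)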

Note:

Please note that all the examples provided here have been using the free tier offered by OpenAI with a certain usage limit. If you intend to use these API keys for more advanced tasks, it is essential to review the billing details.

Best Practices while using API Keys

Using OpenAI API keys effectively and responsibly is essential to ensure a positive and productive experience. Here are some best practices to keep in mind when working with OpenAI API keys:

  1. Use Environment Variables in Place of Your API Key: Treat your API key like a password. Keep it confidential and avoid sharing it publicly or in public repositories. If you need to share your code with others, use environment variables to store and access your API key securely (a short sketch of this pattern follows this list).
  2. Review and Follow OpenAI’s Policies: Familiarize yourself with OpenAI’s terms of service, usage policies, and guidelines. Ensure that your use of the API aligns with OpenAI’s ethical and usage standards. Please be aware that OpenAI’s policies are subject to continuous updates. The details provided in this blog are valid as of Sept 2023.
  3. Start Small While Testing: If you’re new to the API or just experimenting, try short prompts and low request volumes (for example, in the OpenAI Playground) before building a full application, so you can explore the models without incurring unexpected costs.
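As a minimal sketch of the environment-variable pattern from point 1 (the variable name `OPENAI_API_KEY` is the conventional choice; the error message is illustrative):

import os
import openai

# Read the key from the environment instead of hard-coding it in the script
api_key = os.getenv("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this script.")
openai.api_key = api_key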

Common Use Cases:

Here are some common use cases for the OpenAI API:

  1. Content Generation: Automatically generate articles, blog posts, product descriptions, or creative writing.
  2. Chatbots: Build conversational agents that can engage with users in natural language.
  3. Translation: Translate text between languages.
  4. Summarization: Generate concise summaries of long articles or documents.
  5. Question-Answering: Create systems that can answer questions based on textual information.
  6. Text Completion: Assist users in composing emails, code, or other written content.
  7. Language Understanding: Extract insights, sentiments, or entities from text data.
  8. Language Tutoring: Provide language learning assistance, including grammar and vocabulary explanations.

Points to remember while using OpenAI API Key:

  1. API Key Security: Keep your API key secure and confidential. It grants access to OpenAI’s services and should not be shared publicly or stored in insecure locations.
  2. Usage Limits: OpenAI may impose usage limits on your API key based on your subscription plan or organization’s agreement. Ensure that you are aware of these limits to avoid unexpected interruptions in service.
  3. Rate Limiting: OpenAI may enforce rate limiting on your API key to prevent abuse. Be mindful of these limits to avoid receiving HTTP 429 Too Many Requests responses, which indicate that you have made more requests than allowed within a given time window (a simple retry sketch follows this list).
  4. Token Limit: The API has a maximum token limit for input and output. If your prompt or generated text exceeds this limit, you may need to truncate or adjust your content.
  5. Billing: While there may be free or limited usage options, extensive usage of the API can incur costs. You should be aware of the pricing structure and monitor your usage to avoid unexpected charges.
  6. API Version: Keep track of the API version you are using. OpenAI may release updates and new versions of the API, and it’s essential to ensure compatibility with your code.
  7. Data Handling: If you are processing sensitive or personal data using the API, ensure compliance with data privacy regulations and consider data encryption and security practices.
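For point 3, a common pattern is to retry with exponential backoff when the API reports a rate limit. Here is a minimal sketch, assuming the pre-1.0 `openai` library, where such failures surface as `openai.error.RateLimitError`:

import time
import openai

def chat_with_retry(messages, model="gpt-3.5-turbo", max_retries=5):
    # Retry with exponential backoff when the API reports a rate limit (HTTP 429)
    delay = 1
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
            delay *= 2  # double the wait before the next attempt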

Limitations:

  1. Accuracy and Bias: The output generated by AI models can sometimes contain inaccuracies or biases. It’s important to review and moderate the generated content, especially for sensitive or critical applications.
  2. Data Privacy: When using the API, you may need to handle user data or sensitive information responsibly. Ensure compliance with data privacy regulations and implement security measures as needed.
  3. Model Evolution: AI models, including those from OpenAI, are continually evolving. This means that the behaviour, capabilities, and results of models may change over time.
  4. Policy Compliance: Ensure that your usage of the API aligns with OpenAI’s terms of service and usage policies to avoid potential legal issues.

Conclusion

Congratulations! You’ve accomplished the entire process, from setting up your OpenAI account to generating API keys and leveraging them to interact with OpenAI’s language models using Python. With these capabilities at your disposal, you’re well-equipped to integrate advanced natural language processing into your applications and projects.

Feel encouraged to delve deeper into the extensive capabilities of the OpenAI API. There’s a world of possibilities awaiting your exploration. In this blog, we’ve embarked on a journey to harness the power of AI for various applications, but there’s so much more to discover and create. As you venture forward, keep innovating and pushing the boundaries of what AI can achieve.
