After a long wait, OpenAI released the ChatGPT API, and the tech world rushed to try it out.

Introduction

In this post, I will cover the basics of the ChatGPT API and how to use it from Python. It should be a good starting point for anyone who wants to explore further.

Before anything else, log into your OpenAI account and generate an API key. Keep in mind that OpenAI will not display your secret API key again, so store it somewhere safe.

Next, install the OpenAI package for Python (for example, with pip install openai). Then set up your API credentials: rather than pasting the key into your script, expose it as an environment variable and read it at runtime, like so:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

Reading the key from the environment with the os module keeps it out of your source code, so it is never hard-coded.
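If the variable is missing, `os.environ[...]` raises a bare KeyError, which can be confusing. A small helper (a sketch of my own, not part of the openai library) makes the failure explicit:

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Return the API key from the environment, failing with a clear hint if unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"Set the {var_name} environment variable first, "
            f"e.g. export {var_name}='sk-...'"
        )
    return key
```

You would then write openai.api_key = load_api_key() instead of indexing os.environ directly.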

The openai.Completion.create Method

Now that your API credentials are set up, you can generate text with openai.Completion.create. A minimal call looks like this:

model_engine = "text-davinci-002"
prompt = "ChatGPT API in Python"
response = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=50
)
print(response.choices[0].text)

Here I used text-davinci-002, one of the more capable completion models, but you can substitute another engine if you prefer. We provide the prompt and set max_tokens (here 50), which caps the generated text at roughly 50 tokens; since a token is a word fragment, this usually means somewhat fewer than 50 words. The output depends heavily on the model and parameters, but it should generally be a coherent, grammatical response to the prompt. For example, one run produced:

"Explore everything you can do, interact and find out endless possibilities." 

You can use several parameters to customize the generated text, for example:

  • Temperature

This controls the randomness of the generated text: higher temperatures produce more diverse and unexpected output, while lower temperatures make the output more focused and deterministic.

  • top_p

This controls diversity through nucleus sampling. Higher values of "top_p" allow a wider range of tokens into consideration, producing more varied output; lower values restrict sampling to only the most likely tokens.

  • frequency_penalty and presence_penalty

These control how often particular words or phrases are repeated in the generated text. A higher penalty discourages repetition, while a lower (or zero) penalty allows words to recur more freely.
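To build intuition for what temperature does, here is a small self-contained sketch (no API call; the function name and toy logits are my own) that applies temperature scaling to a softmax distribution, which is essentially the operation samplers perform internally:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits divided by temperature: low T sharpens, high T flattens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.2)  # low temperature: near-deterministic
flat = softmax_with_temperature(logits, 2.0)   # high temperature: closer to uniform
```

The top token's probability is much higher in sharp than in flat, which is why low temperatures yield more repeatable output.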

model_engine = "text-davinci-002"
prompt = "How to write an article for beginners"
response = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=200,
    temperature=1,
    top_p=0.5,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    stop=["Java", "C++", "JavaScript"]
)
print(response.choices[0].text)

You get the idea. If that feels cumbersome, the chat endpoint offers a simpler way to interact with the API. For example:

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write about world peace and how we can achieve it."}
    ]
)
print(completion.choices[0].message.content)

Here, each entry in messages is a dictionary with two keys: "role" and "content". The content is the substance of the message, and the role is one of "user", "system", or "assistant"; in this case the user gives the instructions.

This is essentially the same as typing "write about world peace and how we can achieve it" into ChatGPT; you should get a comparable answer, though not necessarily of the same length.

You can add a system message to set the assistant's behavior. The assistant will answer any question you ask, but the API itself does not store previous interactions. To give it memory, append each response to the messages list and send the full history with every request; the assistant can then hold a coherent conversation that remembers earlier turns.
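The pattern of carrying the history forward can be sketched as follows (build_history is a hypothetical helper of my own; the actual API calls are commented out because they need a live key):

```python
def build_history(history, role, content):
    """Return a new message list with one more turn appended."""
    return history + [{"role": role, "content": content}]

# Start with a system message that sets the assistant's behavior.
messages = [{"role": "system", "content": "You are a helpful assistant."}]
messages = build_history(messages, "user", "What is the ChatGPT API?")

# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# reply = response.choices[0].message.content
# messages = build_history(messages, "assistant", reply)  # remember this turn
```

On each new user turn you append to messages and resend the whole list, which is how the model "remembers" the conversation.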

Different Text-Generation Methods

As mentioned earlier, the OpenAI library offers more than one way to generate text: openai.Completion.create for single-prompt completions and openai.ChatCompletion.create for chat-style exchanges. If you prefer a different interface, you can wrap the completion call in your own helper. For example:

def complete(prompt, max_tokens=100):
    return openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=max_tokens
    )

prompt = "5 Tips to article writing"
response = complete(prompt)
print(response.choices[0].text)

This should give you the idea. If you are interested in exploring the ChatGPT 3.5 API further, I would suggest checking out a video tutorial on building a simple chat app with it.

Conclusion

The ChatGPT API is a powerful way to build personalized tools that generate text for your specific needs. Only you know exactly what those needs are, so I recommend trying it out; you might like it! I hope this helps anyone who had no idea how to use the ChatGPT API from Python.
