Boosting your Python scripts with the ChatGPT API
Photo by Emile Perron on Unsplash
As every curious person is doing nowadays, I started using ChatGPT online for fun. I was writing dummy prompts at first, until my laziness kicked in: could I really delegate part of my Python development job to ChatGPT? I had seen a bunch of TikToks claiming it could be done, so I tried it myself.
In any case, I was sure that to use it, I would need to fully integrate it into my Python scripts, so some API calls — or even writing a module providing them — would do the job. I was so lazy that I asked ChatGPT to do it for me.
To my surprise, ChatGPT gave me the wrong instructions!
I logged in and asked ChatGPT straight away:
Asking ChatGPT to provide me with a Python script that calls its API | Screenshot from ChatGPT web interface.
And it output the following script:
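Reconstructed from the details that follow (the openai_secret_manager import, the text-davinci-002 model, a temperature of 0.5 and a 100-token limit), the script looked roughly like this. Note that, as the spoiler further down warns, it does not run as-is:

```python
import openai
import openai_secret_manager  # this package does not actually exist on PyPI

# Get the API key from the (non-existent) secret manager
secrets = openai_secret_manager.get_secret("openai")
openai.api_key = secrets["api_key"]

# Generate a completion
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Hello, ChatGPT!",
    temperature=0.5,
    max_tokens=100,
)

print(response.choices[0].text)
```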
And after a very accurate description of the script, it pointed out the following:
ChatGPT suggesting how to install OpenAI packages | Screenshot from ChatGPT web interface.
SPOILER: the openai_secret_manager package does not work out of the box. It is not an official OpenAI package either.
I assumed ChatGPT knew pip would replace underscores with hyphens, so I immediately installed the packages. It successfully installed the openai package, but not the openai_secret_manager (nor openai-secret-manager). Again, I lazily forwarded the error to ChatGPT, and it suggested another approach:
ChatGPT giving an installation alternative for the openai_secret_manager package | Screenshot from ChatGPT web interface.
But 404: the repository URL didn’t exist.
So we started a long discussion about how to install the secret manager package until my frustration exceeded my laziness.
Going straight to the point: I decided to implement a very simple secret manager module myself, so that I could call ChatGPT from my scripts.
I started by creating a Python file named secret_manager.py in the project directory and defining a get_secret() function that gets the authentication token from the environment or a file. Here’s the implementation:
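A minimal sketch of that module; the OPENAI_TOKEN environment variable and the .openai_token file name are conventions I chose for this example, not anything official:

```python
# secret_manager.py
import os


def get_secret(key_name="openai"):
    """Return the token for key_name, from the environment or a hidden file.

    Looks first for an environment variable (e.g. OPENAI_TOKEN), then for a
    dot-file in the working directory (e.g. .openai_token).
    """
    env_var = f"{key_name.upper()}_TOKEN"
    token = os.environ.get(env_var)
    if token:
        return token
    try:
        with open(f".{key_name}_token") as token_file:
            return token_file.read().strip()
    except FileNotFoundError:
        # Fail gracefully with a clear message instead of crashing.
        print(f"No token found: set {env_var} or create .{key_name}_token")
        return None
```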
There is a catch for the FileNotFoundError exception in case the token file does not exist. This will prevent the program from crashing with an error message that doesn’t clarify what went wrong.
In addition, I like adding a default value for the key_name argument, so that if you call the function without any arguments, it doesn’t raise a TypeError.
Since I was already warmed up, I wrote a simple script that sends the desired prompt to ChatGPT and gets its answer back, using the structure it gave me at first but passing the prompt as an input parameter:
The parameters for the text generation, including the language model to use (text-davinci-002), the temperature (0.5), and the maximum number of tokens in the generated text (100), are set according to ChatGPT’s response, which I assume correspond to reasonable values.
The openai.Completion.create() method generates text through the OpenAI API. It takes the aforementioned parameters plus the desired prompt and returns a response object containing the generated text. Finally, the generated text can be accessed through the text property of the first choice in the choices list of the response object.
I then requested my API keys and saved the token into a file. To get your OpenAI API authentication keys, you must create a user account on the official OpenAI website and access the API Keys section through the web interface itself.
Note: It is good practice to store the token in a file whose name starts with a dot (.), so that it does not show up when listing your directory content and stays invisible to the casual user. And no .txt extension!
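For example (the file name matches the convention my secret manager reads, and the token itself is a placeholder):

```shell
# Hidden file (leading dot), no .txt extension
echo "sk-your-token-here" > .openai_token
chmod 600 .openai_token   # optional: only you can read it
ls                        # the file does not appear in a plain listing
```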
And now, it is time to try it with a simple prompt:
Testing the Python script with a random prompt | Self-made content.
Without judging the quality of the poem… it works! Great!
Now this script can be easily integrated into my development projects (and into yours!)
For more information on the model parameters, you can now ask ChatGPT itself: --prompt "Can you explain how to select the parameters for the text generation?".
As a final comment, I would like to note that the whole secret manager issue could also have been skipped by taking the secret directly from the environment in the main script. But there are cases where we need to read tokens from files and, of course, we are curious people here, aren’t we?
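That environment-only shortcut is essentially a one-liner; the variable name is again just an example:

```python
import os


def api_key_from_env(var_name="OPENAI_API_KEY"):
    """Read the token straight from the environment, skipping the secret manager."""
    return os.environ.get(var_name)
```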