Using the OpenAI API in Python¶
The OpenAI API gives you access to state-of-the-art models like GPT for text and Whisper for speech. This guide walks through authentication, key endpoints, and helpful utilities for robust integration in Python.
1. Setup: Authentication and Environment¶
Step 1: Install required libraries¶
pip install openai python-dotenv
Step 2: Store your API key securely¶
Create a .env file in your project root:
OPENAI_API_KEY=sk-...
Step 3: Load the key in your Python script¶
from dotenv import load_dotenv
import os
import openai
load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
✅ Use `.env` to keep credentials out of source control.
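As a sanity check after loading the environment, you can fail fast when the variable is missing instead of hitting a confusing authentication error later. A minimal sketch (the `require_api_key` helper is illustrative, not part of the SDK):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment and fail early with a clear
    # message if it is absent.
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; check your .env file")
    return key
```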
2. Basic Text Completion¶
Use the Completion endpoint to generate or extend text from a prompt:
response = openai.Completion.create(
model="text-davinci-003",
prompt="Explain the benefits of unit testing",
max_tokens=50,
)
print(response["choices"][0]["text"].strip())
💡 `text-davinci-003` excels at classic single-turn completions.
3. Chat-Based Conversations¶
For multi-turn interactions, use ChatCompletion.create() with role-based messaging:
chat = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Summarize agile development."},
],
)
print(chat["choices"][0]["message"]["content"].strip())
🧠 Use `system` messages to guide the assistant's behavior.
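To keep a conversation going over several turns, append each reply to the message list before the next request. A minimal sketch (the `ask` helper and its `create` parameter are illustrative, not part of the SDK; `create` defaults to `openai.ChatCompletion.create` but can be swapped for a stub when testing):

```python
def ask(messages: list, user_text: str, create=None) -> str:
    # Record the user turn, call the chat endpoint, then record the
    # assistant turn so the model sees the full history next time.
    if create is None:
        import openai  # imported lazily so the helper is easy to stub out
        create = openai.ChatCompletion.create
    messages.append({"role": "user", "content": user_text})
    chat = create(model="gpt-3.5-turbo", messages=messages)
    reply = chat["choices"][0]["message"]["content"].strip()
    messages.append({"role": "assistant", "content": reply})
    return reply
```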
4. Audio Transcription with Whisper¶
Transcribe speech to text using the Audio.transcribe() method:
with open("meeting.wav", "rb") as audio_file:
transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])
🎙️ Supports WAV, MP3, MP4, M4A, and more.
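Uploads are size-limited by the API (25 MB at the time of writing), so it helps to check a recording before sending it. A sketch, with `check_audio_size` as a hypothetical helper and the limit as an assumption you should verify against the current docs:

```python
import os

MAX_UPLOAD_BYTES = 25 * 1024 * 1024  # assumed limit; confirm in the API docs

def check_audio_size(path: str) -> bool:
    # True when the file can be uploaded directly; larger recordings
    # need to be split or re-encoded at a lower bitrate first.
    return os.path.getsize(path) <= MAX_UPLOAD_BYTES
```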
5. Utility Functions for Reliability and Insight¶
These helpers improve reliability and give insight into model usage:
import time
import tiktoken
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
# Count tokens in a string
def count_tokens(text: str) -> int:
return len(encoding.encode(text))
# Retry API calls with exponential backoff
def retry(func, *args, max_retries=3, **kwargs):
    for i in range(max_retries):
        try:
            return func(*args, **kwargs)
        except openai.error.OpenAIError:
            if i == max_retries - 1:
                raise  # out of attempts; surface the last error
            time.sleep(2 ** i)
# Extract message text from chat response
def format_chat(reply) -> str:
return reply["choices"][0]["message"]["content"].strip()
📊 Token counting helps manage limits and costs.
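Token counts turn into cost estimates once you multiply by a price. The rates below are placeholders only (per-1K-token pricing changes; check the current pricing page), and `estimate_cost` is an illustrative helper, not part of any SDK:

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate: float = 0.0015,
                  completion_rate: float = 0.002) -> float:
    # Rates are USD per 1,000 tokens; the defaults are examples only.
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1000
```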
6. How It All Connects¶
sequenceDiagram
participant Dev as Your Script
participant API as OpenAI API
Dev->>API: Request with API key
API-->>Dev: JSON response
Dev->>Dev: Parse & display result
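The round trip in the diagram is plain HTTP: a bearer key in the `Authorization` header, JSON in, JSON out. A sketch that assembles the same request by hand with the standard library (`build_chat_request` is illustrative; the SDK normally does this for you):

```python
import json
import urllib.request

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    # Assemble the same POST the SDK sends: bearer token in the header,
    # a JSON body naming the model and the messages.
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```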
Summary¶
With just a few lines of setup, you can:
- Generate and continue text
- Run multi-turn conversations
- Transcribe audio
- Count tokens and handle retries
For a handy overview of parameters and model limits, see the OpenAI API Cheat Sheet and the OpenAI API Quick Reference.
Additional Resources¶
- API Request/Response Examples – sample logs of typical HTTP interactions, useful when testing your own integrations.
Always keep your API key secure, handle failures gracefully, and monitor your usage to stay within limits.
Ready to go deeper? Try building a CLI assistant or auto-summarizer next.
Follow-Up Exercises¶
Put what you've learned into practice with these standalone projects:
- Prompt Test Harness – design a CLI tool for rapid prompt iteration and logging.
- API Test Harness – create a reusable command-line utility for exercising REST endpoints.
By learning to authenticate requests, handle responses, and manage errors, you can integrate LLMs into your applications. Building on Lesson 02 – Prompt Engineering's reminder that clear instructions shape reliable outputs, Lesson 04 – Understanding response_format={'type': 'json_object'} in LLM APIs will show how to enforce structured JSON replies for downstream processing.