# AI Refactor Exercise
This assignment shows how to use GPT models to improve existing code. You will practice designing prompts, iterating on them, and capturing results with the prompt testing harness from earlier assignments.
## Objectives
- Use GPT to comment or optimize a piece of legacy code from Module 01.
- Document each prompt attempt and note what worked or didn't.
- Optionally generate docstrings or unit tests with an LLM.
## Setup
- Reuse your Python environment from the prompt testing harness or create a new one.
- Install the dependencies (a quick connectivity check is sketched after this list):

  ```bash
  pip install openai python-dotenv
  ```

- Choose a short script or function from your Module 01 exercises (e.g., FizzBuzz or part of the CRUD project).
- Create an `ai_refactor_log.md` file to record prompts and outputs.
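Before sending any prompts, it is worth confirming that the key and client are wired up. The snippet below is a minimal sketch, assuming your API key is stored in a `.env` file as `OPENAI_API_KEY` and that you are on a 1.x release of the `openai` package; the model name is a placeholder for whatever your harness uses.

```python
from dotenv import load_dotenv
from openai import OpenAI

# Load OPENAI_API_KEY (and any other settings) from the local .env file.
load_dotenv()

# The OpenAI client picks up OPENAI_API_KEY from the environment automatically.
client = OpenAI()

# A tiny request is a cheap way to verify the key and network access.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use the model your harness is configured for
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
)
print(response.choices[0].message.content)
```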
## Instructions
- **Initial Review** – run the original code and note any issues or style problems.
- **First Prompt** – ask GPT to refactor or comment the code. Use the CLI harness to send the prompt and capture the response (a stand-in for the harness call is sketched after this list).
- **Iterate** – refine your prompt at least twice to improve the output. Record each version and its results in `ai_refactor_log.md`.
- **Apply Changes** – edit your code with the best suggestions. Include an example where GPT adds meaningful comments or simplifies logic.
- **Optional** – prompt GPT to generate docstrings or simple `pytest` tests. Document how well these suggestions worked.
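If you want to see the moving parts outside the harness, the sketch below sends a refactor prompt directly with the OpenAI client. The file path and model name are assumptions; adjust both to match your project.

```python
from pathlib import Path

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI()

# Assumption: the Module 01 code you chose lives at this path.
source_code = Path("module01/fizzbuzz.py").read_text()

# Embed the code directly beneath the instruction.
prompt = (
    "Add comments explaining the logic in this FizzBuzz function.\n\n"
    + source_code
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Print the reply so it can be copied into ai_refactor_log.md.
print(response.choices[0].message.content)
```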
## Deliverables
- Updated code file with GPT-assisted improvements.
- `ai_refactor_log.md` summarizing each prompt and the resulting code snippets.
- (Optional) Any generated docstrings or tests.
## Example Prompt Log
Below is a short sample of what your `ai_refactor_log.md` might contain. Each run captures the prompt, a snippet of the model response, and quick notes on the result.
### Run 1 – Add comments
**Prompt**

```
Add comments explaining the logic in this FizzBuzz function.
```

**Response excerpt**

```python
def fizz_buzz(n: int) -> list[str]:
    # Iterate from 1 to n inclusive
    result = []
    for i in range(1, n + 1):
        text = ""
        if i % 3 == 0:
            text += "Fizz"
        if i % 5 == 0:
            text += "Buzz"
        # Fall back to the number itself when neither word applies
        result.append(text or str(i))
    return result
```

**Notes** – The function remained the same but now includes inline comments.
### Run 2 – Simplify logic
**Prompt**

```
Refactor the FizzBuzz function for readability.
```

**Response excerpt**

```python
def fizz_buzz(n: int) -> list[str]:
    result = []
    for i in range(1, n + 1):
        value = (
            ("Fizz" if i % 3 == 0 else "") +
            ("Buzz" if i % 5 == 0 else "")
        )
        result.append(value or str(i))
    return result
```

**Notes** – This version replaces the separate `if` statements with inline conditional expressions, so the word is built in a single assignment.
### Run 3 – Generate tests
**Prompt**

```
Write pytest cases to verify the fizz_buzz output.
```

**Response excerpt**

```python
def test_fizz_buzz_basic():
    assert fizz_buzz(5) == ["1", "2", "Fizz", "4", "Buzz"]
```

**Notes** – The generated test passed once it was added to a test file that imports `fizz_buzz`.
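If you take the optional step further, one natural follow-up is to parametrize the checks so a single test covers each rule. The sketch below is hand-written rather than model output, and it assumes the refactored function is importable from a module named `fizzbuzz`:

```python
import pytest

from fizzbuzz import fizz_buzz  # assumption: the function lives in fizzbuzz.py


@pytest.mark.parametrize(
    "n, expected_last",
    [
        (3, "Fizz"),       # multiples of 3 become "Fizz"
        (5, "Buzz"),       # multiples of 5 become "Buzz"
        (15, "FizzBuzz"),  # multiples of both become "FizzBuzz"
        (7, "7"),          # everything else stays a plain number string
    ],
)
def test_fizz_buzz_last_value(n, expected_last):
    # The list has n entries, so the last element exercises the rule for n itself.
    assert fizz_buzz(n)[-1] == expected_last
```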
## Submission
Push your code and log file to the repository. In your pull request description, explain which prompts were most effective and any limitations you encountered.