AI Refactor Exercise

This assignment shows how to use GPT models to improve existing code. You will practice designing prompts, iterating on them, and capturing results with the prompt testing harness from earlier assignments.

Objectives

  1. Use GPT to comment on or optimize a piece of legacy code from Module 01.
  2. Document each prompt attempt and note what worked or didn't.
  3. Optionally, generate docstrings or unit tests with an LLM.

Setup

  1. Reuse your Python environment from the prompt testing harness or create a new one.
  2. Install dependencies (a smoke-test sketch follows this list):
    pip install openai python-dotenv
    
  3. Choose a short script or function from your Module 01 exercises (e.g., FizzBuzz or part of the CRUD project).
  4. Create an ai_refactor_log.md file to record prompts and outputs.
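
Before moving on, it can help to confirm that your key loads correctly. Below is a minimal smoke-test sketch, assuming the current openai Python client and an OPENAI_API_KEY entry in your .env file; the model name is a placeholder, so substitute whichever model your harness uses.

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()      # read OPENAI_API_KEY from the local .env file
client = OpenAI()  # the client picks the key up from the environment

# One tiny request to verify the setup end to end
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; use your course's model
    messages=[{"role": "user", "content": "Reply with the word 'ready'."}],
)
print(response.choices[0].message.content)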

Instructions

  1. Initial Review – run the original code and note any issues or style problems.
  2. First Prompt – ask GPT to refactor or comment the code. Use the CLI harness to send the prompt and capture the response.
  3. Iterate – refine your prompt at least twice to improve the output. Record each version and its results in ai_refactor_log.md (a logging sketch follows this list).
  4. Apply Changes – edit your code with the best suggestions. Include an example where GPT adds meaningful comments or simplifies logic.
  5. Optional – prompt GPT to generate docstrings or simple pytest tests. Document how well these suggestions worked.
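
If your CLI harness does not already write runs to the log, a small helper along these lines can send one prompt and append the prompt/response pair to ai_refactor_log.md. This is only a sketch: run_and_log, the log layout, the model name, and the fizz_buzz.py filename are illustrative assumptions, not part of the assignment.

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
LOG = Path("ai_refactor_log.md")

def run_and_log(title: str, prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send one prompt and append the prompt/response pair to the log."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    with LOG.open("a", encoding="utf-8") as log:
        log.write(f"\nRun – {title}\n\nPrompt\n\n{prompt}\n\nResponse excerpt\n\n{text}\n")
    return text

source = Path("fizz_buzz.py").read_text()  # hypothetical filename for your Module 01 code
run_and_log("Add comments", f"Add comments explaining the logic in this code:\n\n{source}")

Appending rather than overwriting keeps every attempt in one chronological file, which makes it easy to compare prompt versions later.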

Deliverables

  1. Updated code file with GPT-assisted improvements.
  2. ai_refactor_log.md summarizing each prompt and the resulting code snippets.
  3. (Optional) Any generated docstrings or tests.

Example Prompt Log

Below is a short sample of what your ai_refactor_log.md might contain. Each run captures the prompt, a snippet of the model response, and quick notes on the result.

Run 1 – Add comments

Prompt

Add comments explaining the logic in this FizzBuzz function.

Response excerpt

def fizz_buzz(n: int) -> list[str]:
    result = []
    # Iterate from 1 to n inclusive
    for i in range(1, n + 1):
        text = ""
        if i % 3 == 0:  # multiples of 3 contribute "Fizz"
            text += "Fizz"
        if i % 5 == 0:  # multiples of 5 contribute "Buzz"
            text += "Buzz"
        # Fall back to the number itself when neither word applies
        result.append(text or str(i))
    return result

Notes – The function remained the same but now includes inline comments.

Run 2 – Simplify logic

Prompt

Refactor the FizzBuzz function for readability.

Response excerpt

def fizz_buzz(n: int) -> list[str]:
    result = []
    for i in range(1, n + 1):
        value = (
            ("Fizz" if i % 3 == 0 else "") +
            ("Buzz" if i % 5 == 0 else "")
        )
        result.append(value or str(i))
    return result

Notes – This version replaces the if statements with conditional expressions, building each value in a single expression.

Run 3 – Generate tests

Prompt

Write pytest cases to verify the fizz_buzz output.

Response excerpt

from fizz_buzz import fizz_buzz  # assumes the function lives in fizz_buzz.py

def test_fizz_buzz_basic():
    assert fizz_buzz(5) == ["1", "2", "Fizz", "4", "Buzz"]

Notes – The generated tests ran successfully once added to a test file.

Submission

Push your code and log file to the repository. In your pull request description, explain which prompts were most effective and any limitations you encountered.