Prompt Engineering Basics – What It Is and How to Use It

Published: 2025-04-28

Have you ever asked a Large Language Model (LLM) like GPT, Claude, or Gemini a question, only to receive an unexpected or unsatisfactory response? Sometimes the answers can be vague, irrelevant, or even misleading, making it difficult to fully harness the true potential of LLMs.

So, how can we obtain more relevant and accurate outputs? The key lies in Prompt Engineering.

In this article, I will introduce what Prompt Engineering is, explore its techniques, and provide insights on how to use LLMs more effectively.


What is Prompt Engineering?

Prompt Engineering is a technique for designing and optimizing prompts to help LLMs generate high-quality and consistent outputs.

While LLMs can handle a wide range of tasks, such as text and image generation, summarization, and translation, they do not always produce satisfactory results. For example, they may return incorrect responses to vague inputs or generate outputs based on superficial patterns without truly understanding the context. To ensure more effective and reliable outputs, it is crucial to provide appropriate contextual information and detailed instructions within the prompts.

Although Prompt Engineering is a relatively new field, numerous techniques have already been developed and are used to draw out the full potential of LLMs.


Techniques

In this section, I’ll introduce examples of Prompt Engineering using Python code. I won’t cover setting up a Python environment, so if you’d like to try these examples yourself, please refer to the OpenAI documentation.

Before we begin, let’s instantiate the OpenAI client for later use:

from openai import OpenAI
client = OpenAI()

Contextual Prompting

Let’s start with Contextual Prompting – one of the most fundamental yet powerful techniques in prompt engineering. You may already be using it without realizing it. By providing relevant context in your prompts, you can significantly improve the accuracy, relevance, and usefulness of the responses generated by LLMs.

Example

First, here’s an example of an instruction without context. (Note: "in no more than 120 words" is just to keep the response short. While it may seem like context, it’s actually an instruction, so feel free to ignore it here):

prompt = "Explain what photosynthesis is in no more than 120 words."

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": prompt}],
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Photosynthesis is the process by which green plants, algae, and some bacteria convert light energy, usually from the sun, into chemical energy in the form of glucose. This occurs mainly in chloroplasts, using chlorophyll, the green pigment that traps light. During photosynthesis, carbon dioxide from the atmosphere and water absorbed by roots react in the presence of sunlight to produce glucose and oxygen. The overall equation can be summarized as:

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2

This process is crucial for life on Earth, as it provides food for plants and releases oxygen, which is essential for most living organisms.

This response includes technical terms and even a chemical formula, which might be difficult for some readers to understand.

So next, let’s try asking the model to explain it in a way that’s easy for a 12-year-old student to understand.

prompt = """
Explain what photosynthesis is in no more than 120 words.
Use clear and simple language that a 12-year-old student can understand.
Avoid technical jargon unless it's explained. Be concise and informative.
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": prompt}],
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Photosynthesis is the process that plants use to make their own food. It happens mainly in the leaves, where green cells called chloroplasts capture sunlight. Plants take in carbon dioxide from the air and water from the soil. Using sunlight, they combine these ingredients to create glucose, a type of sugar, which serves as food for the plant. As a result of this process, plants also release oxygen, which is important for us to breathe. In short, photosynthesis allows plants to grow and provides oxygen for living beings.

This second response is friendlier and uses simpler words. Most notably, it avoids the chemical formula entirely. As you can see, by providing the right context, you can guide the model to generate answers that better match your purpose.


Specifying Output Format

By default, a model’s response is conversational. However, in some situations – especially when using the output in a program – it can be more useful to receive the response in a structured format like JSON.

You can do this by explicitly specifying the desired output format in the instructions.

Example

instructions = """
Return the response as a JSON object with this format:
```
{
  "...": [
    "...",
    "...",
    "..."
  ]
}
```
"""

user_prompt = "List three benefits of running."

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

{
  "benefits": [
    "Improves cardiovascular health",
    "Boosts mental health and mood",
    "Aids in weight management"
  ]
}
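Once the model returns JSON like this, you can consume it directly in Python. Here’s a minimal sketch using the response shown above as a hardcoded string (in a real program you would parse `response.output[0].content[0].text` instead; note that models sometimes wrap JSON in markdown fences, so you may need to strip those first):

```python
import json

# The structured output from the example above, hardcoded for illustration.
# In practice: raw = response.output[0].content[0].text
raw = """
{
  "benefits": [
    "Improves cardiovascular health",
    "Boosts mental health and mood",
    "Aids in weight management"
  ]
}
"""

data = json.loads(raw)  # parse the JSON string into a Python dict
for benefit in data["benefits"]:
    print("-", benefit)
```

This is exactly why structured formats are useful: the output plugs straight into the rest of your program instead of requiring fragile text parsing.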

Zero-Shot Prompting

Zero-Shot Prompting is a technique where you ask LLMs to perform a task without providing any examples. You simply describe the task in natural language and rely on the model’s pre-learned understanding.

Below are some examples of Zero-Shot Prompting for a sentiment classification task:

Example: Positive

instructions = """
Classify the sentiment of the following text as
Positive, Negative, or Neutral.
"""

user_prompt = """
I really enjoyed the movie. It was fantastic and well-acted.
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Positive


Example: Neutral

user_prompt = """
The movie was okay.
Some parts were interesting, but others felt slow.
"""

LLM response:

The sentiment of the text is Neutral.


Example: Negative

user_prompt = """
I didn’t enjoy the movie.
The story was confusing and dragged on.
"""

LLM response:

Negative


Even without examples, the model performs well because it’s been trained on a wide variety of similar tasks. This makes Zero-Shot Prompting a quick and flexible tool – especially for common use cases.
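If you want to classify many texts with the same zero-shot setup, it can help to factor the request construction into a small helper and pass its result to `client.responses.create`. A sketch (the `build_request` name is my own, not part of the OpenAI SDK):

```python
INSTRUCTIONS = """
Classify the sentiment of the following text as
Positive, Negative, or Neutral.
"""

def build_request(text: str, model: str = "gpt-4o-mini") -> dict:
    """Build keyword arguments for a zero-shot sentiment request.

    Usage: client.responses.create(**build_request(text))
    """
    return {
        "model": model,
        "input": [{"role": "user", "content": text}],
        "instructions": INSTRUCTIONS,
        "store": False,
    }

req = build_request("The movie was okay.")
print(req["model"])
```

Keeping the instructions in one place makes it easy to iterate on the prompt without touching the call sites.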


Few-Shot Prompting

Few-Shot Prompting is a technique where you ask LLMs to perform a task by providing it with a few examples within the prompt. These examples serve as demonstrations that help the model understand what kind of output is expected. This approach is also referred to as in-context learning, because the model learns the task from the examples provided in the context – without needing to update its underlying parameters.

When only one example is given, it’s often called One-Shot Prompting.

Example

instructions = """
Classify the given animal into one of the following categories:
- Mammal
- Bird
- Reptile
- Fish
- Insect

Examples:
Animal: "Elephant"
Category: Mammal

Animal: "Eagle"
Category: Bird

Animal: "Crocodile"
Category: Reptile

Animal: "Salmon"
Category: Fish

Animal: "Butterfly"
Category: Insect

Now classify the following animal:
"""

user_prompt = "Penguin"

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Animal: “Penguin”
Category: Bird


user_prompt = "Cow"

LLM response:

Animal: “Cow”
Category: Mammal


user_prompt = "Bee"

LLM response:

Animal: “Bee”
Category: Insect
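The few-shot instructions above can also be kept as data and assembled programmatically, which makes it easy to add, remove, or swap demonstration pairs. A sketch (the function name is my own):

```python
# (animal, category) demonstration pairs used as few-shot examples
EXAMPLES = [
    ("Elephant", "Mammal"),
    ("Eagle", "Bird"),
    ("Crocodile", "Reptile"),
    ("Salmon", "Fish"),
    ("Butterfly", "Insect"),
]

def build_instructions(examples) -> str:
    """Assemble few-shot classification instructions from example pairs."""
    lines = [
        "Classify the given animal into one of the following categories:",
        "- Mammal",
        "- Bird",
        "- Reptile",
        "- Fish",
        "- Insect",
        "",
        "Examples:",
    ]
    for animal, category in examples:
        lines.append(f'Animal: "{animal}"')
        lines.append(f"Category: {category}")
        lines.append("")
    lines.append("Now classify the following animal:")
    return "\n".join(lines)

instructions = build_instructions(EXAMPLES)
print(instructions)
```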


Chain-of-Thought Prompting

Chain-of-Thought (CoT) Prompting is a technique where you guide LLMs to reason step by step by explicitly asking them to explain their thinking process before giving the final answer.

Instead of asking for a direct answer, the prompt encourages the model to “think out loud” – mimicking how humans solve problems by breaking them down into logical steps. This is especially useful for math, logic, and other reasoning-heavy tasks, where simply asking the question might not lead to the correct answer.

Advanced LLMs have been trained on data that includes examples of Chain-of-Thought (CoT) style reasoning. As a result, they can sometimes produce CoT-like outputs spontaneously, even without explicit instructions.

So, does this mean learning CoT is no longer valuable?

Not at all. Understanding CoT is still highly relevant today. For more complex problems — especially those with traps or multiple reasoning steps — relying on implicit reasoning can lead to incorrect answers. Explicitly prompting the model to reason step-by-step helps ensure more consistent and accurate results.

Note:
For reasoning models like o3 or o4-mini, you should avoid giving explicit CoT instructions. These models perform step-by-step reasoning internally. When prompting reasoning models, it’s best to keep your instructions simple and direct.

For more details, see: How to prompt reasoning models effectively

Example

This time, the model will solve an arithmetic word problem. The problem contains not only numbers but also a descriptive story. It isn’t actually difficult, but LLMs don’t always solve these kinds of problems correctly – especially when they try to give a direct answer without reasoning. Let’s take a look.

instructions = "Answer in a word."

user_prompt = """
A delivery truck carries 4 boxes of apples.
Each box contains 18 red apples and 12 green apples.
On the way, 10 red apples fall out from one of the boxes.
Later, the driver picks up 2 more boxes,
each with 15 red apples and 10 green apples.
How many red and green apples does the truck have now?
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Red: 70, Green: 48.

Can you determine if this is correct?

Actually, this response is… incorrect.
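You can check it yourself with plain arithmetic – the correct answer is 92 red and 68 green apples, not 70 and 48:

```python
# Verify the apple problem step by step with plain arithmetic.
red = 4 * 18        # red apples in the original 4 boxes: 72
green = 4 * 12      # green apples in the original 4 boxes: 48
red -= 10           # 10 red apples fall out: 62
red += 2 * 15       # 2 more boxes with 15 red each: 92
green += 2 * 10     # 2 more boxes with 10 green each: 68
print(red, green)   # 92 68
```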

So next, let’s provide a step-by-step reasoning example to the model.

instructions = """
Question:
A fruit vendor loads 5 baskets of oranges onto a cart.
Each basket contains 20 big oranges and 10 small oranges.
On the way, 8 big oranges fall out from one of the baskets.
Later, the vendor buys 3 more baskets, each containing 18 big oranges and 12 small oranges.
How many big and small oranges does the cart have now?

Answer:
1. First, there are 5 baskets,
each with 20 big oranges and 10 small oranges.

Total big oranges: 20 × 5 = 100
Total small oranges: 10 × 5 = 50

2. 8 big oranges fall out from one of the baskets.

New total big oranges: 100 - 8 = 92

3. The vendor buys 3 more baskets.
Each basket has 18 big oranges and 12 small oranges.

Additional big oranges: 18 × 3 = 54
Additional small oranges: 12 × 3 = 36

4. Add everything together.

Final big oranges: 92 + 54 = 146
Final small oranges: 50 + 36 = 86
"""

user_prompt = """
A delivery truck carries 4 boxes of apples.
Each box contains 18 red apples and 12 green apples.
On the way, 10 red apples fall out from one of the boxes.
Later, the driver picks up 2 more boxes,
each with 15 red apples and 10 green apples.
How many red and green apples does the truck have now?
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

To find out how many red and green apples the truck has now, we can follow these steps:

  1. Initial Count of Apples:
  • There are 4 boxes, each containing 18 red apples and 12 green apples.
  • Total red apples from 4 boxes: 18 × 4 = 72
  • Total green apples from 4 boxes: 12 × 4 = 48
  2. Apples that Fell Out:
  • 10 red apples fall out from one of the boxes.
  • New total of red apples: 72 − 10 = 62
  3. Adding More Boxes:
  • The driver picks up 2 more boxes, each containing 15 red apples and 10 green apples.
  • Additional red apples from 2 boxes: 15 × 2 = 30
  • Additional green apples from 2 boxes: 10 × 2 = 20
  4. Final Count of Apples:
  • Final total of red apples: 62 + 30 = 92
  • Final total of green apples: 48 + 20 = 68

Final Summary:

  • Red apples: 92
  • Green apples: 68

This time… it’s correct! :-)

As you can see, instead of answering directly, the model reasoned through the steps and arrived at the correct answer. This is the power of the Chain-of-Thought Prompting technique.

Example: Zero-Shot Chain-of-Thought Prompting

Next, let’s look at Zero-Shot Chain-of-Thought Prompting. In the previous example, I provided a detailed worked example using Few-Shot Prompting. But there is an easier way to guide LLMs to think step by step.

How? Just ask.

instructions = "Let's think step by step."

user_prompt = """
A delivery truck carries 4 boxes of apples.
Each box contains 18 red apples and 12 green apples.
On the way, 10 red apples fall out from one of the boxes.
Later, the driver picks up 2 more boxes,
each with 15 red apples and 10 green apples.
How many red and green apples does the truck have now?
"""

response = client.responses.create(
    model="gpt-4o-mini",
    input=[{"role": "user", "content": user_prompt}],
    instructions=instructions,
    store=False
)

print(response.output[0].content[0].text)

LLM response:

Let’s break it down step by step:

  1. Calculate the total number of apples in the original 4 boxes:
  • Each box has 18 red apples and 12 green apples.
  • For 4 boxes:
    • Red apples: 4 × 18 = 72
    • Green apples: 4 × 12 = 48

So initially, there are:

  • Red apples: 72
  • Green apples: 48

  2. Account for the 10 red apples that fell out:
  • New count of red apples: 72 − 10 = 62

So now, after the apples fell out:

  • Red apples: 62
  • Green apples: 48

  3. Calculate the total apples in the 2 new boxes:
  • Each of the 2 new boxes has 15 red apples and 10 green apples.
  • For 2 boxes:
    • Red apples: 2 × 15 = 30
    • Green apples: 2 × 10 = 20
  4. Add the new apples to the truck:
  • New count of red apples: 62 + 30 = 92
  • New count of green apples: 48 + 20 = 68
  5. Final totals:
  • Red apples: 92
  • Green apples: 68

Thus, the truck now has 92 red apples and 68 green apples.

Once again, the answer is correct. So we can get a reasoned response just by saying, “Let’s think step by step”.

This simple but powerful technique is highly recommended for complicated tasks.
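Since it’s just a fixed phrase, you can wrap it in a tiny helper and toggle it on for any prompt (the function name here is my own, purely for illustration):

```python
STEP_BY_STEP = "Let's think step by step."

def with_cot(instructions: str = "") -> str:
    """Append the zero-shot CoT trigger phrase to any instructions."""
    return (instructions + "\n" + STEP_BY_STEP).strip()

print(with_cot("Answer clearly."))
```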


Wrap up

In this article, we introduced what prompt engineering is, along with some basic techniques using Python code.

These techniques are just the basics.

In the field of Prompt Engineering, more advanced approaches are being developed every day, and you can find many more techniques online.

I hope you enjoyed this article.

Keep on building.