AI's new trend: prompt engineering

3 min read · Jul 10


Using AI models to produce an outcome, a better outcome, and the best outcome


In the past few days, I have created two small tools using ChatGPT.

This is just the most basic use of generative AI. What the two tools have in common is that I made only a casual inquiry, and the AI gave me a basically reasonable result. I say "basically reasonable" because my inquiries had no pattern or direction. The second application did have an inherent direction, such as "check English syntax and grammar," but that direction came with no specific template, or was too simple, so the generated results could not go deep. This article discusses how to make structured inquiries, also known as "prompt engineering."

New programming style

I personally think that prompting is a new programming approach. Don't assume that guiding models with natural language is easy; I believe it's quite the opposite. Natural-language programming lacks the syntax of traditional programming languages, which means there are no type checks or other protective mechanisms in place. If the model (AI) receives an inappropriate prompt, the generated result can be completely different from what was expected.

Here is a prompt I have used with a diffusion model in computer vision. Although it brought some surprises, it did not actually reach my ultimate goal.

prompting steps 👇

LLM-based app prompting

  • Zero-shot inference: a single prompt with no examples. A good (large, recent) model can usually handle it; an old, small model often cannot.
  • One-shot or few-shot inference: more than one element in the prompt — the task plus one or more previous examples (for instance, a prompt with two previous examples).
  • In-context learning (ICL): the model picks up the task from the examples supplied in the prompt itself, without any weight updates.
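The difference between zero-shot and few-shot prompting is just how the prompt string is assembled before it is sent to the model. A minimal sketch (the grammar-fixing task and the examples are hypothetical, and the actual call to any particular LLM API is omitted):

```python
def build_prompt(task, examples=(), query=""):
    """Assemble a zero-/one-/few-shot prompt as plain text.

    With no examples this is zero-shot; each (input, output) pair
    added makes it one-shot, two-shot, ... few-shot.
    """
    parts = [task]
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: rely entirely on the model's pretraining.
zero = build_prompt("Fix the English grammar in the sentence.",
                    query="He go to school yesterday.")

# Few-shot: the same task, with two worked examples in context.
few = build_prompt(
    "Fix the English grammar in the sentence.",
    examples=[("She have a cat.", "She has a cat."),
              ("I is happy.", "I am happy.")],
    query="He go to school yesterday.")
```

The assembled few-shot string is exactly what in-context learning consumes: the model never updates its weights, it only conditions on the examples placed in the prompt.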

When few-shot learning still doesn't satisfy you, fine-tuning the model becomes necessary.
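Fine-tuning needs a dataset of prompt–completion pairs rather than a cleverer prompt. A minimal sketch of preparing such records in the JSONL shape many fine-tuning pipelines accept (the field names here are an assumption, not any specific vendor's schema):

```python
import json

def to_jsonl(pairs):
    # One JSON object per line: the prompt and its target completion.
    return "\n".join(
        json.dumps({"prompt": p, "completion": c}) for p, c in pairs
    )

records = to_jsonl([
    ("Fix the grammar: He go to school.", "He goes to school."),
    ("Fix the grammar: She have a cat.", "She has a cat."),
])
```

A handful of lines like this is nowhere near enough for real fine-tuning; the point is only the format, where each line is one supervised example.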

Prompting process in a computer vision app

Here is an example: generate a glass cup, then perform simple rendering, and finally inject water into it.


In general, the prompting process in CV is similar to automated Photoshop. The quality of the generated images or the inpainting depends on the quality of the prompt text and of the model. Currently, I have conducted a test based on two diffusion models. The model has been released on

My prompt

  • prompt/generate: a transparent glass cup, empty, no water
  • human-interaction: crop the generated result
  • prompt/inpaint: some lighting, on the glass surface, very warm
  • prompt/inpaint: fill some droplet water
  • prompt/inpaint: some droplet water behind the glass cup surface
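The steps above can be sketched as a small pipeline that alternates model prompts with human interaction. The step list mirrors my prompts; the runner and its three callables are a hypothetical sketch, not a real diffusion API:

```python
# Each step is (actor, action, prompt-or-note), in the order I ran them.
STEPS = [
    ("model", "generate", "a transparent glass cup, empty, no water"),
    ("human", "crop",     "crop the generated result"),
    ("model", "inpaint",  "some lighting, on the glass surface, very warm"),
    ("model", "inpaint",  "fill some droplet water"),
    ("model", "inpaint",  "some droplet water behind the glass cup surface"),
]

def run(steps, generate, inpaint, crop):
    """Thread an image through the pipeline.

    The three callables stand in for a diffusion model's text-to-image
    and inpainting calls and for a human crop in some UI.
    """
    image = None
    for actor, action, prompt in steps:
        if action == "generate":
            image = generate(prompt)
        elif action == "inpaint":
            image = inpaint(image, prompt)
        elif action == "crop":
            image = crop(image)
    return image
```

Writing the process down as data like this makes the "automated Photoshop" character explicit: the prompts are the script, and only the crop step needs a human.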





AI advocate // Computer vision // NeRF // Machine learning // Deep learning // Certified TensorFlow Developer