(make.com) Translation Agent via Reflection Workflow

TeeTracker
4 min readJul 12, 2024

--

Lately I’ve been checking out the reflection workflow, a kind of agentic workflow from Andrew Ng’s team that enhances the effectiveness of LLM-based machine translation. This article introduces how I implemented it using make.com.

Intro

The source code repository is https://github.com/andrewyng/translation-agent/tree/main. I first became interested in this workflow through these two talks:
- What’s next for AI agentic workflows ft. Andrew Ng of AI Fund (youtube.com)
- Andrew Ng On AI Agentic Workflows And Their Potential For Driving AI Progress (youtube.com)

By calling an LLM iteratively to refine its output (similar to Corrective RAG and Adaptive RAG), these agentic workflows build self-reflection and correction into the solving path. Personally, I think that whether it’s RAG or LLM-based machine translation, any project that relies on an LLM inevitably needs a self-correcting mechanism, as the achievements of Andrew’s team show. Different paths lead to the same goal, so continuous self-correction should be the default for an LLM project.

Method

To get quickly to the point, I’m not going to copy the research team’s results here; you can check the GitHub repo above for details. What I want to show is that I used make.com to implement the basic idea, simplifying the multi-chunk part of the original repo by reading the text directly as a single chunk.

Input

I created an XLS document as input with the following columns:

  • from: Source language.
  • to: Target language.
  • country: Text origin country.
  • text: Text.
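To make the input schema concrete, here is a minimal sketch of one input row as a Python dataclass. The class and field names are my own (the `from` column is renamed, since `from` is a Python keyword); only the four columns come from the article.

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the input worksheet's columns.
@dataclass
class TranslationRequest:
    source_lang: str  # "from" column: source language
    target_lang: str  # "to" column: target language
    country: str      # "country" column: text origin country
    text: str         # "text" column: text to translate

# One sample row, as it might be read from the XLS input.
sample = TranslationRequest("English", "Spanish", "Mexico", "Hello, world.")
```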

Output

The output is an XLS file with the following columns:

  • from: Source language.
  • to: Target language.
  • country: Text origin country.
  • origin: Text before translation.
  • translated: Text after translation.
  • expert linguist suggestion: Suggestions for how the model can correct and improve.

Tool

www.make.com is an RPA tool, or rather an online RPA tool for the AI era. It integrates today’s most popular model interfaces, making it easy to use them and focus on the workflow without worrying about the underlying architecture.

Implementation

run make.com flow

I used Gemini 1.5 Flash to do all the work, though of course other LLMs would work too. Following the approach of Prof. Ng’s team, the model is called three times to produce:

1. The initial translation.
2. Corrections made by the model to the initial translation.
3. The initial translation plus corrections, resulting in the final translation.
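The three steps above can be sketched as one Python function. The `llm` parameter stands for any callable that takes a prompt string and returns a string (a Gemini 1.5 Flash wrapper in my case); the prompts here are illustrative, not the exact ones from the repo.

```python
def reflective_translate(llm, text, source_lang, target_lang, country):
    """Three LLM calls: draft -> reflection -> improved translation."""
    # 1. The initial translation.
    draft = llm(
        f"Translate this {source_lang} text to {target_lang}: {text}"
    )
    # 2. Corrections: the model reflects on its own draft.
    suggestions = llm(
        f"As an expert {target_lang} linguist (audience: {country}), "
        f"list concrete improvements for this translation: {draft}"
    )
    # 3. Draft plus corrections, yielding the final translation.
    final = llm(
        f"Rewrite the {target_lang} translation applying the suggestions.\n"
        f"Draft: {draft}\nSuggestions: {suggestions}"
    )
    return draft, suggestions, final
```

The suggestions from step 2 are what end up in the output sheet’s “expert linguist suggestion” column.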

from input workbook, through workflow, into output workbook

As mentioned, the input is the XLS above and the output is the XLS below, which is more visual. I display them in split screen, but that’s not the main focus.

In summary

For LLMs, zero-shot queries are not the best choice. According to Prof. Ng’s research, an agentic workflow can improve the quality of the model’s answers, and it has been widely shown that multiple iterations produce better results.

Patterns

Prof. Ng’s team identified four patterns that summarize agentic workflows. The most common one is tool use: when we call a model while building a chatbot, the model requests the use of a local or remote tool. A tool here can be a function, a method, or even a REST API.
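A minimal sketch of the tool-use pattern: the model emits a tool request, and our code dispatches it to a local function. The JSON request format, the `dispatch` helper, and the `get_weather` tool are all illustrative assumptions, not a real provider API.

```python
import json

# A toy local tool; in practice this could wrap a REST API call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_reply: str) -> str:
    """Run a tool request the model emitted as JSON, e.g.
    {"tool": "get_weather", "args": {"city": "Berlin"}}."""
    request = json.loads(model_reply)
    return TOOLS[request["tool"]](**request["args"])
```

The tool’s result would normally be fed back to the model as context for its next reply.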

Reflection, meanwhile, is about avoiding getting the answer in one shot, instead exploring ways to correct the process along the way. Corrective RAG and Adaptive RAG are good examples of agentic flows with this kind of reflection.

Compare

With the same gpt-3.5, using an agentic workflow to make multiple corrections can produce results that surpass both zero-shot gpt-4 and gpt-3.5 itself. Similarly, if gpt-4 is backed by such a workflow, the improvement should be significant too.
