(LlamaIndex Workflows) Translation Agent via Reflection Workflow
LlamaIndex has recently introduced Workflows. In this article, we will use this new framework to implement the reflection workflow.
Intro
Previously, we completed the reflection workflow using make.com. Check the article for more details:
With the help of the LlamaIndex workflow, we can easily create a closed graph to manage the different states of an application. It is a fully event-driven system, similar to the event buses familiar from Android development.
We can consider the LlamaIndex workflow an alternative to LangChain's LangGraph; I have covered a Hello World of LangGraph here.
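To make the event-driven idea concrete, here is a minimal, framework-free sketch (all names are illustrative, not LlamaIndex APIs): handlers subscribe to an event type, and publishing an event invokes every matching handler, which is essentially what the workflow runtime does with the decorated step methods.

```python
from collections import defaultdict

class EventBus:
    """Minimal event bus: handlers are keyed by event type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        # Dispatch the event to every handler registered for its exact type.
        for handler in self._handlers[type(event)]:
            handler(event)

class TranslationRequested:
    def __init__(self, text):
        self.text = text

bus = EventBus()
results = []
bus.subscribe(TranslationRequested, lambda e: results.append(e.text.upper()))
bus.publish(TranslationRequested("hello"))
# results == ["HELLO"]
```

The workflow framework adds typing on top of this pattern: the event type in a step's signature decides which events that step receives.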
Method
As before, we need to handle three components: init-translation, reflection, and improve-translation. In the LlamaIndex workflow these are called "steps", and each step is one component. For example:
class AgenticTranslationWorkflow(Workflow):
    @step()
    async def do_init_translation(self, event: StartEvent) -> InitTranslationEvent:
        ic(event)
        system_message_prompt = "You are an expert linguist, specializing in translation from {source_lang} to {target_lang}."
        system_message_tmpl = SystemMessagePromptTemplate.from_template(
            template=system_message_prompt
        )
        human_message_prompt = """This is an {source_lang} to {target_lang} translation, please provide the {target_lang} translation for this text. \
Do not provide any explanations or text apart from the translation.
{source_lang}: {source_text}
{target_lang}:"""
        human_message_tmpl = HumanMessagePromptTemplate.from_template(
            template=human_message_prompt
        )
        prompt = ChatPromptTemplate.from_messages(
            [system_message_tmpl, human_message_tmpl]
        )
        model = LOOK_UP_LLM[
            st.session_state.get("key_selection_init_translation_model", 0)
        ]["model"]
        chain = prompt | model | StrOutputParser()
        translation_1 = chain.invoke(
            {
                "source_lang": event["source_lang"],
                "target_lang": event["target_lang"],
                "source_text": event["source_text"],
            },
            {"configurable": {"session_id": None}},
        )
        ic(translation_1)
        return InitTranslationEvent(
            source_lang=event["source_lang"],
            target_lang=event["target_lang"],
            source_text=event["source_text"],
            country=event["country"],
            translation_1=translation_1,
        )
The decorator must be @step(…) so the framework can orchestrate the handling of the StartEvent. Here, StartEvent inherits from the Event class. The framework uses the methods decorated with @step(…), together with the event types in their signatures, as the handlers that process each event.
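The custom event definitions are not shown in this article. In the real API they subclass the Event class from llama_index.core.workflow, with typed fields. As a framework-free approximation (a sketch, not LlamaIndex code), they could be modeled as dataclasses carrying the same fields the steps pass along:

```python
from dataclasses import dataclass

@dataclass
class InitTranslationEvent:
    # Fields mirror the keyword arguments returned by do_init_translation.
    source_lang: str
    target_lang: str
    source_text: str
    country: str
    translation_1: str

@dataclass
class ReflectionEvent:
    # Same payload, plus the critique produced by the reflection step.
    source_lang: str
    target_lang: str
    source_text: str
    country: str
    translation_1: str
    reflection: str
```

This is why later steps read fields with attribute access (event.source_lang) rather than the dict-style access used on StartEvent.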
State termination
There is another special event, StopEvent, that any step can return to tell the workflow to terminate and return the final result. Here is an example:
    @step()
    async def do_improve_translation(self, event: ReflectionEvent) -> StopEvent:
        ic(event)
        system_message_prompt = "You are an expert linguist, specializing in translation editing from {source_lang} to {target_lang}."
        system_message_tmpl = SystemMessagePromptTemplate.from_template(
            template=system_message_prompt
        )
        human_message_prompt = """Your task is to carefully read, then edit, a translation from {source_lang} to {target_lang}, taking into
account a list of expert suggestions and constructive criticisms.
The source text, the initial translation, and the expert linguist suggestions are delimited by XML tags <SOURCE_TEXT></SOURCE_TEXT>, <TRANSLATION></TRANSLATION> and <EXPERT_SUGGESTIONS></EXPERT_SUGGESTIONS> \
as follows:
<SOURCE_TEXT>
{source_text}
</SOURCE_TEXT>
<TRANSLATION>
{translation_1}
</TRANSLATION>
<EXPERT_SUGGESTIONS>
{reflection}
</EXPERT_SUGGESTIONS>
Please take into account the expert suggestions when editing the translation. Edit the translation by ensuring:
(i) accuracy (by correcting errors of addition, mistranslation, omission, or untranslated text),
(ii) fluency (by applying {target_lang} grammar, spelling and punctuation rules and ensuring there are no unnecessary repetitions), \
(iii) style (by ensuring the translations reflect the style of the source text)
(iv) terminology (inappropriate for context, inconsistent use), or
(v) other errors.
Output only the new translation and nothing else, no instructions, XML tags, headlines etc."""
        human_message_tmpl = HumanMessagePromptTemplate.from_template(
            template=human_message_prompt
        )
        prompt = ChatPromptTemplate.from_messages(
            [system_message_tmpl, human_message_tmpl]
        )
        model = LOOK_UP_LLM[
            st.session_state.get("key_selection_improve_translation_model", 0)
        ]["model"]
        chain = prompt | model | StrOutputParser()
        response = chain.invoke(
            {
                "source_lang": event.source_lang,
                "target_lang": event.target_lang,
                "source_text": event.source_text,
                "country": event.country,
                "translation_1": event.translation_1,
                "reflection": event.reflection,
            },
            {"configurable": {"session_id": None}},
        )
        translation_2 = (
            response if not isinstance(response, list) else response[0]["text"]
        )
        ic(translation_2)
        return StopEvent(
            result=.....
        )
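The middle do_reflection step is not shown in this article. Its prompt, following the same translation-agent pattern as the other two steps, might be built along these lines. The exact wording is an assumption on my part; plain str.format is used here so the sketch can run without any LLM call:

```python
# Assumed reflection prompt, patterned after the other two steps' prompts.
reflection_human_prompt = """Your task is to carefully read a source text and a \
translation from {source_lang} to {target_lang}, and then give constructive \
criticism and helpful suggestions to improve the translation.

<SOURCE_TEXT>
{source_text}
</SOURCE_TEXT>

<TRANSLATION>
{translation_1}
</TRANSLATION>

Write a list of specific, helpful and constructive suggestions for improving \
the translation."""

# In the real step these placeholders are filled by the prompt template;
# str.format shows the same substitution directly.
prompt_text = reflection_human_prompt.format(
    source_lang="English",
    target_lang="Spanish",
    source_text="Hello, world.",
    translation_1="Hola, mundo.",
)
```

Inside the workflow, this prompt would be wrapped in a HumanMessagePromptTemplate, chained to a model exactly as in the two steps shown, and its output returned as the reflection field of a ReflectionEvent.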
Start workflow
As you can see, the steps are methods defined inside a class that inherits from the Workflow class. This means we can instantiate and use it like any normal Python class to start a workflow instance; here is an example:
class AgenticTranslationWorkflow(Workflow):
    @step()
    async def do_init_translation(self, event: StartEvent) -> InitTranslationEvent:
        ic(event)
        ...

    @step()
    async def do_reflection(self, event: InitTranslationEvent) -> ReflectionEvent:
        ic(event)
        ...

    @step()
    async def do_improve_translation(self, event: ReflectionEvent) -> StopEvent:
        ic(event)
        ...
To start the flow:
workflow = AgenticTranslationWorkflow(timeout=60, verbose=True)
event = {
    "source_lang": st.session_state.get("key_selection_source_language", 0),
    "target_lang": st.session_state.get("key_selection_target_language", 1),
    "source_text": source_text.strip(),
    "country": st.session_state.get("key_selection_country", 1),
}
result = await workflow.run(**event)
Notice:
1. We must use the dict-style to create a StartEvent. If you check the source code of run(), you will find that the event is created by passing a dict object.
2. On the contrary, the StopEvent must be created manually.
3. The return value of run() is exactly the value you assign to result when you create the StopEvent in a step.
return StopEvent(
    result=.....
)
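The contract in the notice above can be sketched without the framework (the names here are illustrative, not the actual LlamaIndex internals): run(**kwargs) packs its keyword arguments into a dict-backed StartEvent, and whatever is assigned to StopEvent(result=...) comes back as the return value of run():

```python
import asyncio

class StartEvent:
    """Dict-backed start event: fields are read with event["key"]."""
    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def __getitem__(self, key):
        return self._data[key]

class StopEvent:
    def __init__(self, result):
        self.result = result

class ToyWorkflow:
    async def step(self, event: StartEvent) -> StopEvent:
        # Stand-in for the real translation steps: echo the source text.
        return StopEvent(result=event["source_text"])

    async def run(self, **kwargs):
        stop = await self.step(StartEvent(**kwargs))
        return stop.result  # run() returns exactly what StopEvent carries

result = asyncio.run(ToyWorkflow().run(source_text="Bonjour"))
# result == "Bonjour"
```

This also shows why the first step reads event["source_lang"] (dict-style StartEvent) while later steps use attribute access on their typed events.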
Implementation
Here is an example using Streamlit that implements the entire workflow of the reflection agent translation. For more details, I have pushed the code to GitHub, and you are welcome to check it out and improve upon the idea.
As before, the individual chains are built with LangChain; this time, the LlamaIndex Workflow orchestrates those LangChain chains to complete the flow.