LangGraph: Create an Agent from Scratch

TeeTracker
3 min read · Mar 6, 2024


Use LangGraph to drive the agent instead of the traditional while-true loop.

Here is a Python notebook. It implements the agent both with the while-true approach and, more recently, with LangGraph.

This article is not meant to pit the traditional while-true loop against LangGraph. If you are used to while-true, nothing here rejects it; in practice, though, LangGraph can often replace the loop.

The whole idea is as follows:

Note: Essentially, an agent is a human-machine chat process with “memory”. Throughout the process, always watch whether the LLM needs to use a client-side tool, that is, whether tool_calls exist in its reply.

There are a total of three nodes in the graph (two of our own, plus the predefined END):

workflow.add_node("run_agent", run_agent)
workflow.add_node("run_tool", run_tool)

workflow.set_entry_point("run_agent")
workflow.add_edge("run_tool", "run_agent")

workflow.add_conditional_edges(
    "run_agent",    # start node name
    continue_next,  # decides what to do next AFTER the start node; its input is the start node's output
    {  # keys: return values of continue_next; values: the next node to run
        "to_run_tool": "run_tool",
        "to_finish": END,
    },
)
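
The workflow object above is a StateGraph, and it must be compiled before it can be invoked; the article leaves both steps implicit. A minimal sketch, assuming the GraphState defined later in this article:

from langgraph.graph import END, StateGraph

workflow = StateGraph(GraphState)  # the shared state schema, defined below
# ... add_node / set_entry_point / add_edge / add_conditional_edges as above ...
app = workflow.compile()  # "app" is what we invoke() at the end
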
  • run_agent: the entry point of the graph; its goal is to interact with the LLM (the model and openai_tools it uses are sketched after this list).
def run_agent(state: GraphState) -> Dict[str, Any]:
    chat_response: ChatResponse = model.chat(state["messages"], tools=openai_tools)
    ai_message: ChatMessage = chat_response.message

    pretty_print("AI messaging", ai_message)

    state["messages"].extend([ai_message])
    return {"messages": state["messages"], "chat_response": chat_response}
  • run_tool: executes the tool the LLM asked for. Whether to go there (to_run_tool) is decided from the latest LLM response: do tool_calls exist?
    continue_next() is specifically designed for the entry point: every action after the entry point is decided in this function. It returns a name representing the next action, which tells LangGraph which node to execute next. Here we have to_run_tool and to_finish, meaning: either execute the run_tool node, or move on to END.
    The add_conditional_edges() call on the workflow above connects the entry point with the continue_next() function.
def continue_next(
    state: GraphState,
) -> Literal["to_run_tool", "to_finish"]:

    def _should_continue(chat_response: ChatResponse) -> bool:
        """Decide whether a tool function should be called (True) or not."""
        return (
            chat_response.message.additional_kwargs.get("tool_calls", None) is not None
        )

    if _should_continue(state["chat_response"]):
        state["tool_call"] = state["chat_response"].message.additional_kwargs[
            "tool_calls"
        ][0]
        return "to_run_tool"
    else:
        return "to_finish"

def run_tool(state: GraphState) -> Dict[str, Sequence[ChatMessage]]:
    tool_call = state["tool_call"]

    func_id: str = tool_call.id
    func_name: str = tool_call.function.name
    args_json: str = tool_call.function.arguments

    func: FunctionTool = func_tools_dict.get(func_name)
    res: ToolOutput = func(**json.loads(args_json))

    pretty_print("Ran tool", res)

    func_message = ChatMessage(
        role=MessageRole.TOOL,
        content=str(res),
        name=func_name,
        additional_kwargs={
            "tool_call_id": func_id,
            "name": func_name,
        },
    )

    state["messages"].extend([func_message])
    return {"messages": state["messages"]}
  • END: the end of the graph; this node is reached when the LLM no longer needs to execute a tool. END is a node predefined by LangGraph.
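
The model, openai_tools, and func_tools_dict used by the nodes live in the notebook rather than in the article. A plausible sketch, assuming llama_index's OpenAI wrapper and a hypothetical multiply tool (extra kwargs of model.chat(), such as tools, are forwarded to the OpenAI API):

from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Registry used by run_tool to look up the function the LLM asked for.
multiply_tool = FunctionTool.from_defaults(fn=multiply)
func_tools_dict = {multiply_tool.metadata.name: multiply_tool}

# OpenAI-format tool schemas passed to model.chat() in run_agent.
openai_tools = [t.metadata.to_openai_tool() for t in func_tools_dict.values()]

model = OpenAI(model="gpt-3.5-turbo")  # placeholder; any model with tool-calling support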

Then, to start things off, send an initial message and add it to the chat history (messages):

def create_messages(human_input: str) -> Sequence[ChatMessage]:
    """Create a sequence of ChatMessage from a string input."""
    messages: Sequence[ChatMessage] = [
        ChatMessage(
            role=MessageRole.SYSTEM,
            content=("You are an assistant to perform the user input."),
        ),
        ChatMessage(
            role=MessageRole.USER,
            content=("{input}"),
        ),
    ]
    prompt_template: ChatPromptTemplate = ChatPromptTemplate(messages)
    return prompt_template.format_messages(input=human_input)

result = app.invoke({"messages": create_messages(human_input=human_input)})
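
invoke() returns the final graph state, so the agent's answer is the last entry of the chat history:

final_message: ChatMessage = result["messages"][-1]  # the closing AI reply; it carries no tool_calls
print(final_message.content)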

By the way, we define the state of a graph as follows:

from typing import Optional, Sequence, TypedDict

from openai.types.chat import ChatCompletionMessageToolCall

class GraphState(TypedDict, total=False):
    # TypedDict fields cannot carry default values; total=False makes every key optional.
    messages: Sequence[ChatMessage]  # the history of the interaction with the model
    chat_response: ChatResponse  # the latest response of the model
    tool_call: ChatCompletionMessageToolCall  # the tool call the model requires

The following section is provided for reference only: it achieves the same behavior with a while-true loop:

def __call__(self, human_input: str) -> ChatMessage:
    pretty_print("Agent on start.")
    messages: Sequence[ChatMessage] = self.chat_prompt_template.format_messages(
        input=human_input
    )
    self.chat_history.extend(messages)
    while True:
        chat_response: ChatResponse = self.model.chat(
            messages, tools=self.openai_tools
        )
        ai_message: ChatMessage = chat_response.message
        pretty_print("AI messaging", ai_message)

        self.chat_history.extend([ai_message])
        if not self.should_continue(chat_response):
            pretty_print("Agent on stop.")
            return ai_message
        else:
            func_message = self.run_tool(
                ai_message.additional_kwargs["tool_calls"][0]
            )
            self.chat_history.extend([func_message])
            messages = self.chat_history
            logger.info("Agent in continue.")
