LangChain: Multi-User Conversation Support

TeeTracker
12 min read · Mar 21, 2024


· Intro
· Method
Preconditions for the experiment
Role field
id
content
Apply ConversationBufferMemory and ConversationChain
· Summary
Suggestion
· Code
· Follow-up read


Intro

Using a few techniques, we can make a conversation work among multiple users, providing the basics of a simple “group conversation”. This article is not about implementing a full “group conversation” system; rather, it provides a “memory buffer” that supports one.

The goal is to let AIMessage, HumanMessage, SystemMessage, etc. (all subclasses of BaseMessage) support coming from multiple users; in other words, give them an identifier, so that a HumanMessage can come from user-1, user-2, and so on. This is the foundation of multi-user chat: the chat history is no longer just one human and one AI, it can be a whole group of them. We need a good way to encode these identifiers so that the LLM can find each participant's conversational context from them.

Method

We can attach extra information, in our case an identifier, to certain fields of a message. The currently open fields are content, id, and any other keys (which are put into the message's additional_kwargs dictionary); check BaseMessage here. Let's talk about each of these fields in turn.
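
For orientation, here is a minimal sketch of the three open fields on a single message (assuming the langchain_core import path; the field names match the BaseMessage API):

from langchain_core.messages import HumanMessage

msg = HumanMessage(
    content="Hello dudes",                # the free-form text body
    id="user-1",                          # optional identifier, defaults to None
    additional_kwargs={"uid": "user-1"},  # any extra metadata lands here
)
print(msg.content, msg.id, msg.additional_kwargs)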

Preconditions for the experiment

We assume that a chat system maintains a ConversationBufferMemory and that all of our conversations live in its history, obtained as follows:

memory = ConversationBufferMemory(return_messages=True)
memory.buffer.append(HumanMessage(content="Hello dudes", id="user-1"))
.....
memory.load_memory_variables({})["history"]

output:

[
│ AIMessage(content='This is a Gaming Place'),
│ HumanMessage(content='Hello dudes', id='user-1'),
│ HumanMessage(content='hi', id='user-2')
......
]

Role field

When we create a message via a dictionary or a factory method, we can specify the role, for example:

ChatMessagePromptTemplate.from_template(
    role="human", template=prompt
)
# or
{
    "role": "human",
    "content": "some content",
}

Most mainstream LLMs currently support only the roles human, user, ai, assistant, and system. While the LangChain framework claims to support other roles, I haven't come across a model that accepts them (OpenAI's GPT certainly doesn't; maybe others do). So, at least for now, we cannot carry the user's identifier in the role field.
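
To illustrate, here is a hedged sketch: ChatMessage is LangChain's generic message type that accepts an arbitrary role string, but a provider integration such as OpenAI's rejects it when the request is built.

from langchain_core.messages import ChatMessage

# Constructing a message with a custom role works locally...
msg = ChatMessage(role="user-1", content="Hello dudes")

# ...but passing it to an OpenAI chat model fails, because "user-1"
# cannot be mapped onto the provider's fixed role set.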

You will receive LLM feedback like the following when using roles other than the standard ones.

ValueError: Unexpected message type: None. Use one of 'human', 'user', 'ai', 'assistant', or 'system'.

⛔️ This approach won’t work.

id

When we construct a message using the constructor, we can specify an id. Check the documentation for BaseMessage and you'll see that it defaults to None. It seems we should be able to differentiate users by setting this field.

For example, we can create multiple HumanMessage objects and add them to a ConversationBufferMemory; calling load_memory_variables on the memory gives us the complete list of messages, which we can insert into the history section of the ChatPromptTemplate.

memory = ConversationBufferMemory(return_messages=True)
....
....
memory.buffer.append(HumanMessage(content="Hello dudes", id="user-1"))
memory.buffer.append(HumanMessage(content="hi", id="user-2"))
memory.buffer.append(HumanMessage(content="yo yo", id="user-3"))
memory.buffer.append(HumanMessage(content="nice to see you", id="user-4"))
memory.buffer.append(HumanMessage(content="hoho dude", id="user-5"))
memory.buffer.append(HumanMessage(content="o lalala", id="user-L"))
memory.buffer.append(HumanMessage(content="guten tag", id="user-XXXXL"))
memory.buffer.append(HumanMessage(content="Let's get started, ok?", id="user-1"))
memory.buffer.append(HumanMessage(content="YES", id="user-2"))
memory.buffer.append(HumanMessage(content="YEAH....", id="user-3"))
memory.buffer.append(HumanMessage(content="Cool..", id="user-4"))
memory.buffer.append(HumanMessage(content="yup.", id="user-5"))
memory.buffer.append(HumanMessage(content="Great.....", id="user-L"))
memory.buffer.append(HumanMessage(content="alles klar", id="user-XXXXL"))
memory.buffer.append(HumanMessage(content="I think I was the winner", id="user-5"))
....
....
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content=("You are an AI assistant." "You can handle the query of user.")
        ),
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{query}"),
    ]
)
....
....
cxt_history = memory.load_memory_variables({})["history"]
some_chain.invoke(
    {
        "history": cxt_history,
        "query": "some query",
    }
)

⛔️ But this approach won't work. When we open LangSmith, we notice that although the ids appear in the output of the filled prompt, NO ids are present in the input that the chain feeds to the downstream LLM.

1st: the IDs appear in the output of the filled prompt; 2nd: the IDs are not provided in the LLM input.

So we cannot differentiate between users or recover any information about them; as shown above, the LLM never sees the identifiers.
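
One way to see why is to serialize a message the way the OpenAI integration does. This is a sketch assuming langchain_community is installed and uses its OpenAI adapter helper convert_message_to_dict:

from langchain_community.adapters.openai import convert_message_to_dict
from langchain_core.messages import HumanMessage

msg = HumanMessage(content="Hello dudes", id="user-1")
print(convert_message_to_dict(msg))
# {'role': 'user', 'content': 'Hello dudes'}  <- the id field is gone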

When we tried to find out how many ids were involved in the conversation and asked about the comments of user-5, we received a strange hallucination.

user-5 did not say that at all; obviously, the model could not locate the ids and simply made something up.

💡Reference code section: Failed approach: Directly use the history of ConversationBufferMemory: cxt_history

content

  • 👍 dictionary(cxt_dict)

It worked ✅

We can manipulate the content, i.e. the literal text of the message. Since it can be any text, we can design a fixed format or structure that embeds an identifier, a role, and the text itself.

from typing import Dict, List, Union

from langchain.memory import ConversationBufferMemory
from langchain_core.messages import HumanMessage


def convert_memory_to_dict(memory: ConversationBufferMemory) -> List[Union[str, Dict[str, str]]]:
    """Convert the memory to a list: a preamble string followed by dicts of uid, role, and content."""
    res = [
        """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.

Notice: The 'uid' is user id, 'role' is user role for human or ai, 'content' is the message content.

"""
    ]
    history = memory.load_memory_variables({})["history"]
    # A fixed format that embeds uid, role, and the text into the content field.
    content_fmt = """{{
uid:"{uid}"
role:"{role}"
content:"{content}"
}}
"""
    for hist_item in history:
        role = "human" if isinstance(hist_item, HumanMessage) else "ai"
        res.append(
            {
                "role": role,
                "content": content_fmt.format(
                    content=hist_item.content,
                    uid=hist_item.id if role == "human" else "",
                    role=role,
                ),
            }
        )
    return res

In the for loop, we read the role, id, and content of each message from the history of the ConversationBufferMemory, pack them into a dictionary, and collect the dictionaries into a list; the original ConversationBufferMemory is thus mapped to a list of dictionaries.

Content of each dictionary:

{
    "role": <"human", "ai", "system", ...>,
    "content": <any text>
}

LangChain can transform such a dictionary into a proper message deriving from the base class BaseMessage.
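
A minimal sketch of that conversion (convert_to_messages is the helper in langchain_core.messages; MessagesPlaceholder applies equivalent logic when the prompt is filled):

from langchain_core.messages import convert_to_messages

msgs = convert_to_messages(
    [
        {"role": "ai", "content": "This is a Gaming Place"},
        {"role": "human", "content": 'uid:"user-1" role:"human" content:"Hello dudes"'},
    ]
)
# -> [AIMessage(content='This is a Gaming Place'),
#     HumanMessage(content='uid:"user-1" role:"human" content:"Hello dudes"')]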

Now we feed this list as history into the prompt defined above:

cxt_dict = convert_memory_to_dict(memory)

build_chain_without_parsing(model).invoke(
    {
        "history": cxt_dict,
        "query": .....,
    }
)

We see here that LangChain automatically converts the dictionaries into the corresponding message types: when the role is ai it becomes an AIMessage, otherwise a HumanMessage.

ChatPromptValue(
│ messages=[
│ │ SystemMessage(content='You are an AI assistant.You can handle the query of user.'),
│ │ HumanMessage(
│ │ │ content="The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \nIf the AI does not know the answer to a question, it truthfully says it does not know.\n\nNotice: The 'uid' is user id, 'role' is user role for human or ai, 'content' is the message content.\n\n"
│ │ ),
│ │ AIMessage(content='{\nuid:""\nrole:"ai"\ncontent:"This is a Gaming Place"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-1"\nrole:"human"\ncontent:"Hello dudes"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-2"\nrole:"human"\ncontent:"hi"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-3"\nrole:"human"\ncontent:"yo yo"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-4"\nrole:"human"\ncontent:"nice to see you"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-5"\nrole:"human"\ncontent:"hoho dude"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-L"\nrole:"human"\ncontent:"o lalala"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-XXXXL"\nrole:"human"\ncontent:"guten tag"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-1"\nrole:"human"\ncontent:"Let\'s get started, ok?"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-2"\nrole:"human"\ncontent:"YES"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-3"\nrole:"human"\ncontent:"YEAH...."\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-4"\nrole:"human"\ncontent:"Cool.."\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-5"\nrole:"human"\ncontent:"yup."\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-L"\nrole:"human"\ncontent:"Great....."\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-XXXXL"\nrole:"human"\ncontent:"alles klar"\n}\n'),
│ │ HumanMessage(content='{\nuid:"user-5"\nrole:"human"\ncontent:"I\'m good and the best."\n}\n'),
│ │ HumanMessage(
│ │ │ content='content=\'How many users are involved in this conversation exclude the AI or System messages?\\nAlso provide the list of user ids. The user ids can be any format unique to each user.\\n\\nNotice: \\n\\nGive me a simple result with the only number of users without any instruction text or additional information,\\nkeep the result as simple as possible,ie. 1,2 or 3....\\nOutput format: \\nuser_count=x, x is number of users\\n\\nThe user ids will be saved inside "[]".\\nOutput format: \\nuser_ids=[user_1,.......]\' id=\'user-X\''
│ │ )
│ ]
)

In LangSmith, we can see that all previous conversations are passed into the LLM, including the ids and the rest of the content (see 2nd image).

All the history content at this point carries the user identifier, the role, and the text. Observing the LLM's input, we can see that these contents reach the model, and our query gets a satisfactory answer. For example, when we ask how many people are participating in the conversation and to list them, the output matches our expectations, e.g. it accurately lists the ids of all participants.

Here is also an accurate list of all comments made by user-5

💡Reference code section: Convert all history into a list of dictionaries: cxt_dict

  • additional_kwargs

We can also modify the loop body: for example, create a field in the dict called uid. LangChain automatically places this uid into the message's additional_kwargs, which is itself a dict:

additional_kwargs = {"uid": "user-5"}
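
A hedged sketch of that behavior: any dict key beyond role and content should land in additional_kwargs when LangChain builds the message (verify against your langchain_core version):

from langchain_core.messages import convert_to_messages

msgs = convert_to_messages([{"role": "human", "content": "yup.", "uid": "user-5"}])
print(msgs[0].additional_kwargs)  # expected: {'uid': 'user-5'}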

Check LangSmith:

Change some details in the loop:

def convert_memory_to_dict(memory: ConversationBufferMemory) -> List[Union[str, Dict[str, str]]]:
    """Convert the memory to a list: a preamble string followed by dicts; uid now travels as its own key."""
    res = [
        """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.

Notice: The 'uid' is user-id, 'role' is user role for human or ai, 'content' is the message content.

"""
    ]
    history = memory.load_memory_variables({})["history"]
    for hist_item in history:
        role = "human" if isinstance(hist_item, HumanMessage) else "ai"
        res.append(
            {
                "role": role,
                "content": hist_item.content,
                # This extra key is placed into the message's additional_kwargs.
                "uid": hist_item.id if role == "human" else "",
            }
        )
    return res

This code is used the same way as before; see here.

⛔️ But this approach won't work, and strangely so: what is sent to the LLM is only the content of each BaseMessage, and none of the data we put into additional_kwargs is included; it is only visible to the few steps that run before the request reaches the model (💡check the follow-up for a fix). I've posted about it in a GitHub issue and on Stack Overflow, hoping for an answer in the future.

What made this hard to figure out is actually LangSmith: in the picture you can still see the ids in the output of the filled prompt, so the UI presents them as if they were part of the LLM's input.

💡Reference code section: Failed approach: Convert all history into a list of dictionaries: cxt_dict

  • string(cxt_str)

It worked ✅

Similar to the above approach, but instead of packing the content into a list of dictionaries, we concatenate it into a single string. Here is what I mean:

def convert_memory_to_str(memory: ConversationBufferMemory) -> str:
    """Convert the memory into one concatenated string."""
    res = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.

Notice: The 'uid' is user id, 'role' is user role for human or ai, 'content' is the message content.
{
"""
    history = memory.load_memory_variables({})["history"]
    for hist_item in history:
        role = "human" if isinstance(hist_item, HumanMessage) else "ai"
        res += f"""{{
"uid":"{hist_item.id if role == 'human' else ''}",
"role":"{role}",
"content": "{hist_item.content}"
}},
"""
    # Remove the trailing comma and newline.
    res = res[:-2]
    res += """
}"""
    return res

Instead of the dictionaries previously built in the loop, we now use string concatenation, and then we invoke the chain.

cxt_str = convert_memory_to_str(memory)

build_chain_without_parsing(model).invoke(
    {
        "history": cxt_str,
        "query": .....,
    }
)

We can observe the output after the prompt is filled. All user conversations, including text content, user id, and role, are compressed into ONE HumanMessage.

⚠️ Note that here both the AI messages and the human messages are packaged together. This is the biggest flaw of this approach.

ChatPromptValue(
│ messages=[
│ │ SystemMessage(content='You are an AI assistant.You can handle the query of user.'),
│ │ HumanMessage(
│ │ │ content='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. \nIf the AI does not know the answer to a question, it truthfully says it does not know.\n\nNotice: The \'uid\' is user id, \'role\' is user role for human or ai, \'content\' is the message content.\n{\n{\n"uid":"",\n"role":"ai",\n"content": "This is a Gaming Place"\n},\n{\n"uid":"user-1",\n"role":"human",\n"content": "Hello dudes"\n},\n{\n"uid":"user-2",\n"role":"human",\n"content": "hi"\n},\n{\n"uid":"user-3",\n"role":"human",\n"content": "yo yo"\n},\n{\n"uid":"user-4",\n"role":"human",\n"content": "nice to see you"\n},\n{\n"uid":"user-5",\n"role":"human",\n"content": "hoho dude"\n},\n{\n"uid":"user-L",\n"role":"human",\n"content": "o lalala"\n},\n{\n"uid":"user-XXXXL",\n"role":"human",\n"content": "guten tag"\n},\n{\n"uid":"user-1",\n"role":"human",\n"content": "Let\'s get started, ok?"\n},\n{\n"uid":"user-2",\n"role":"human",\n"content": "YES"\n},\n{\n"uid":"user-3",\n"role":"human",\n"content": "YEAH...."\n},\n{\n"uid":"user-4",\n"role":"human",\n"content": "Cool.."\n},\n{\n"uid":"user-5",\n"role":"human",\n"content": "yup."\n},\n{\n"uid":"user-L",\n"role":"human",\n"content": "Great....."\n},\n{\n"uid":"user-XXXXL",\n"role":"human",\n"content": "alles klar"\n},\n{\n"uid":"user-5",\n"role":"human",\n"content": "I am the winner"\n}\n}'
│ │ ),
│ │ HumanMessage(
│ │ │ content='content=\'How many users are involved in this conversation exclude the AI or System messages?\\nAlso provide the list of user ids. The user ids can be any format unique to each user.\\n\\nNotice: \\n\\nGive me a simple result with the only number of users without any instruction text or additional information,\\nkeep the result as simple as possible,ie. 1,2 or 3....\\nOutput format: \\nuser_count=x, x is number of users\\n\\nThe user ids will be saved inside "[]".\\nOutput format: \\nuser_ids=[user_1,.......]\' id=\'user-X\''
│ │ )
│ ]
)

Looking at LangSmith, as expected, we can see all the conversation content compressed into a string in the history (1st picture below). Now when we query for the list of participants and the conversation records of user-5, we get the results we expected.

In the 2nd, 3rd pictures, we see the previous conversation content being placed into ONE HumanMessage, and our follow-up questions being placed into another.

💡Reference code section: Convert all history into a single string: cxt_str

Apply ConversationBufferMemory and ConversationChain

All the experiments above were done with LCEL, which means we had to write a system prompt, a history placeholder, and so on ourselves, and there is no mechanism that grows the conversation history automatically. Imagine building a real conversation system: the history needs to keep growing. Instead, we can use LangChain's ConversationBufferMemory and ConversationChain together to complete the task.

Here is the complete code snippet:

memory = ConversationBufferMemory(return_messages=True)
mem_vars = memory.load_memory_variables({})
pretty_print("Memory Variables init", mem_vars)
pretty_print("Memory Variables in str list (buffer_as_str) init", memory.buffer_as_str)

memory.buffer.append(AIMessage(content="This is a Gaming Place"))
mem_vars = memory.load_memory_variables({})
pretty_print("Memory Variables seeded", mem_vars)
pretty_print(
    "Memory Variables in str list (buffer_as_str), seeded", memory.buffer_as_str
)

memory.buffer.append(HumanMessage(content="Hello dudes", id="user_1"))
memory.buffer.append(HumanMessage(content="hi", id="user_2"))
memory.buffer.append(HumanMessage(content="yo yo", id="user_3"))
memory.buffer.append(HumanMessage(content="nice to see you", id="user_4"))
memory.buffer.append(HumanMessage(content="glad to see you", id="user_5"))
memory.buffer.append(HumanMessage(content="good luck dudes", id="user_5"))
memory.buffer.append(HumanMessage(content="I'm a great user", id="user_5"))
memory.buffer.append(HumanMessage(content="great to see you", id="user_6"))
# memory.buffer.append(HumanMessage(content="Merci", id="user_7"))
# memory.buffer.append(HumanMessage(content="Danke sehr", id="user_8"))
memory.buffer.append(HumanMessage(content="Merci", id="user_XL"))
memory.buffer.append(HumanMessage(content="Danke sehr", id="user_XXL"))
mem_vars = memory.load_memory_variables({})
pretty_print("Memory Variables", mem_vars)
pretty_print("Memory Variables in str list (buffer_as_str)", memory.buffer_as_str)

model = llm
conversation = ConversationChain(llm=model, memory=memory)
conv_chain_out = conversation.invoke(
    input="""How many users are involved in this conversation?
Also provide the list of user ids. The user ids can be any format unique to each user.
Use 'id' as unique identifier for each user.

Notice:

Give me a simple result with the only number of users without any instruction text or additional information,ie. 1,2 or 3....
Output format:
user_count=x, x is number of users

The user ids will be saved inside "[]".
Output format:
user_ids=[user_1,.......]"""
)
pretty_print("conv_chain_out", conv_chain_out)

conv_chain_out = conversation.invoke(
    input="""Give the list of all the messages from "user_5" and put them in a "[]" without any instruction text, newlines or additional information.
""",
)
pretty_print("user_5 conv_chain_out", conv_chain_out)

Here's a question: does ConversationChain have some internal mechanism that can read ids? I had the same question at first, but LangSmith cleared everything up. It turns out that ConversationChain simply copies all previous records into a string, without any conversion: it literally dumps the history from the ConversationBufferMemory, class names and object contents included, into the history field of its prompt.
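
You can reproduce the effect without any chain at all. A short sketch: with return_messages=True the memory hands back a list of message objects, and rendering that list into a plain string template simply takes its repr, ids included.

history = memory.load_memory_variables({})["history"]
# ConversationChain's default prompt is a *string* template ending in
# "Current conversation:\n{history}\nHuman: {input}\nAI:", so the message
# list is stringified wholesale, reprs and ids included:
print(str(history))
# [AIMessage(content='This is a Gaming Place'),
#  HumanMessage(content='Hello dudes', id='user_1'), ...]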

In LangSmith, we observe that, much like our own string approach, it consolidates both the conversation history and our recent query into a single HumanMessage. Yes, this mirrors what we achieved by hand.

  • Current conversation is the conversation history
  • Human is our query.
  • 🐞 Take a look at the output in the 1st image: where does it seem off?

This final question is not relevant to the main subject of this article. If you wish, use the commented-out code to replace the original lines and rerun the remaining code.

# memory.buffer.append(HumanMessage(content="Merci", id="user_7"))
# memory.buffer.append(HumanMessage(content="Danke sehr", id="user_8"))
memory.buffer.append(HumanMessage(content="Merci", id="user_XL"))
memory.buffer.append(HumanMessage(content="Danke sehr", id="user_XXL"))

💡Reference code section: Multi-user conversation

Summary

For the LLM to differentiate between users in a multi-user conversation, we need to compress all messages, including user identifiers and text content, into either a list of dictionaries (cxt_dict) or a concatenated string (cxt_str); done this way, the downstream LLM can see the conversation history, as shown earlier. You can also rely solely on the LangChain framework: ConversationChain uses a similar method, and although it has some flaws, the overall approach is sound and shouldn't be dismissed over minor imperfections.

Suggestion

Use LangSmith to verify everything while developing with the LangChain framework.
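
A minimal setup sketch, using the standard LangSmith environment variables (the key is a placeholder):

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"          # turn on tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"   # placeholder
os.environ["LANGCHAIN_PROJECT"] = "multi-user-chat"  # optional project name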

Code

Follow-up read
