Implementing LLM Responses with Prediction Guard API


ManyChat, a popular chat automation tool, is used extensively for customer service. This guide covers two methods of integrating LLM responses using the Prediction Guard API. The first method is straightforward: a single question without context, set up quickly with no manual coding. The second is more complex, requiring a Lambda function (or similar) to process chat requests, interact with Prediction Guard, and respond via ManyChat’s dynamic block. While it is possible to manage context solely within ManyChat, doing so requires significant manual effort, and you may lose context after a certain number of responses.

No Context Example

Our goal in this example is to allow the customer to click a button if they need a question answered. This will send a request to the Prediction Guard Chat API endpoint and the response from the API will be sent to the customer as a Telegram message. They can then choose to ask a new question, speak to an agent, or close the conversation.


Video Tutorial:

Here is a video tutorial for more detailed guidance:


  1. Create a New Automation: Begin by setting up a new automation.
  2. Trigger Automation: Select an appropriate context to trigger the automation.
  3. Create a Telegram Menu: Use a Telegram send-message block to build a menu. Include a “Question?” button to send user queries to Prediction Guard.
  4. User Input Block: Add a block for users to input their questions.

External Request Body

  1. Prompt and Save Response: Prompt users to enter their question and save the response to a custom field (e.g., User_Question1).
  2. External Request Action Block: Set up this block to make HTTP POST requests to the Prediction Guard chat completions endpoint, with the necessary headers and body.

The body should look something like this (make sure to add the user question field to the request):

```json
{
  "model": "Neural-Chat-7B",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful bot for a company called Prediction Guard."
    },
    {
      "role": "user",
      "content": "{{User_Question1}}"
    }
  ]
}
```

External Request Response

  1. Clear the Bot_Response1 User Field: Add this action above the External Request action.
  2. Test the Response: Ensure that the system works as intended.
  3. Map Response to Custom Field: Link the API response to the Bot_Response1 custom field. (The JSON path should be something like this : $.choices[0].message.content)
  4. Create Response Message Block: Set up a block in ManyChat to relay the response to the user (this should output the Bot_Response1 field).
  5. Provide Additional Options: Include options for users to ask new questions (this would route back to the external request block), speak to an agent, or close the conversation.
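As a sanity check for step 3, you can exercise the same JSON path against a sample chat completion response. This is a minimal sketch; the sample response shape below is an assumption for illustration, based on the mapping described above:

```javascript
// Sample chat completion response (illustrative shape).
const sampleResponse = {
  choices: [
    {
      message: {
        role: "assistant",
        content: "Hello! How can I help you today?",
      },
    },
  ],
};

// Equivalent of the JSON path $.choices[0].message.content
function extractReply(response) {
  return response.choices[0].message.content;
}

console.log(extractReply(sampleResponse));
// Hello! How can I help you today?
```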

After completing this example your flow should look like this: No context finished flow

Include Conversation Context Example

Our goal in this example is again to let the customer click a button when they need a question answered; notably, however, this flow includes the context of previous questions. The button sends a request to your personal Lambda function URL, which processes the ManyChat input, makes an API request to Prediction Guard, and formats the response for ManyChat. The ManyChat response sends the message to the customer and also creates or replaces the text in the context fields you will create in ManyChat. The customer can then choose to continue the conversation, ask a new question (which clears the context fields), speak to an agent, or close the conversation.



  1. Follow Basic Example Steps: Implement steps 1-6 from the no context example.
  2. Use Dynamic Content: Instead of an External Request action block, add a “Get Dynamic Content” block:

Dynamic Body Block

  1. Send Data to Lambda Function: Configure the block to send “Full Contact Data” to your Lambda function URL:

Dynamic Body Request
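ManyChat’s “Full Contact Data” payload includes many more fields (subscriber id, name, and so on); the sketch below is an assumption for illustration and shows only the custom fields the Lambda function reads:

```json
{
  "custom_fields": {
    "convo_placeholder": "Do you sell wooden trains?",
    "User_Text": "[\"How is your day?\"]",
    "Bot_Text": "[\"Hello there! I'm doing great.\"]"
  }
}
```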

  1. Lambda Function Goals:
  • Parse Incoming Request.
  • Extract Custom Fields.
  • Extract the conversation history, which we will save to the User_Text and Bot_Text custom TEXT fields you should have created. We are not using an array field type due to ManyChat limitations at the time of writing; however, the values should be stored in an array-style format so you can programmatically build the Prediction Guard chat request.
  • Append the last customer input to the user messages array (User_Text)
  • Prepare Messages for API Request by formatting a request in the format required by the Prediction Guard API
  • Make API Request to Prediction Guard and Process Response
  • Append the Prediction Guard chat response to the bot messages array (Bot_Text)
  • Format the ManyChat response (it must be formatted as described in ManyChat’s dynamic block documentation; make sure to note which platform you are using). This responds to the user and also overwrites User_Text and Bot_Text with the new complete context.
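The history-rebuilding step above can be sketched in isolation. This is a minimal example, assuming the User_Text and Bot_Text fields hold JSON-encoded string arrays as described; the field contents are illustrative:

```javascript
// Illustrative custom-field contents (JSON-encoded string arrays).
const userText = '["How is your day?"]';     // User_Text custom field
const botText = '["Doing great, thanks!"]';  // Bot_Text custom field
const lastUserInput = "Do you sell trains?"; // latest customer message

const userArray = JSON.parse(userText);
const botArray = JSON.parse(botText);
userArray.push(lastUserInput);

// Interleave user/assistant turns; the final user turn has no
// assistant reply yet, so filter out the undefined entry.
const messages = userArray
  .map((content, i) => [
    { role: "user", content },
    { role: "assistant", content: botArray[i] },
  ])
  .flat()
  .filter((msg) => msg.content);

console.log(messages.length); // 3
console.log(messages[2]);     // { role: 'user', content: 'Do you sell trains?' }
```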

JavaScript Example for Lambda Function

```javascript
const https = require("https");

exports.handler = async (event) => {
  // Parse the incoming ManyChat request
  const requestBody = JSON.parse(event.body);
  const customFields = requestBody.custom_fields;

  // Extract the latest user question and conversation history
  const lastUserInput = customFields.convo_placeholder;
  const userArrayString = customFields.User_Text || JSON.stringify([]);
  const botArrayString = customFields.Bot_Text || JSON.stringify([]);

  const userArray = JSON.parse(userArrayString);
  const botArray = JSON.parse(botArrayString);

  // Append the latest user input to the user array
  userArray.push(lastUserInput);

  // Prepare the messages array for the Prediction Guard API request
  let messages = [
    {
      role: "system",
      content:
        "You are a helpful assistant for a toy Company named Walt's Toys. Welcome the customer and only talk about toys!",
    },
    // Interleave user and bot messages
    ...userArray
      .map((content, i) => [
        { role: "user", content },
        { role: "assistant", content: botArray[i] },
      ])
      .flat()
      .filter((msg) => msg.content), // Filter out undefined content
  ];

  const apiData = JSON.stringify({ model: "Neural-Chat-7B", messages });

  const options = {
    hostname: "", // Prediction Guard API hostname
    path: "/chat/completions",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": "<YOUR PREDICTION GUARD API KEY>", // Replace with your actual API key
    },
  };

  // Function to make the API request
  const makeApiRequest = () => {
    return new Promise((resolve, reject) => {
      const req = https.request(options, (res) => {
        let responseString = "";

        res.on("data", (chunk) => {
          responseString += chunk;
        });

        res.on("end", () => {
          try {
            const response = JSON.parse(responseString);
            if (!response.choices || response.choices.length === 0) {
              throw new Error(
                "Invalid response: choices array is missing or empty."
              );
            }
            resolve(response);
          } catch (error) {
            reject(error);
          }
        });
      });

      req.on("error", (error) => {
        reject(error);
      });

      req.write(apiData);
      req.end();
    });
  };

  try {
    const apiResponse = await makeApiRequest();
    const replyMessage = apiResponse.choices[0].message.content;

    // Append the API response to the bot array
    botArray.push(replyMessage);

    // Format the response for ManyChat
    const manyChatResponse = {
      version: "v2",
      content: {
        type: "telegram", // Adjust as needed
        messages: [
          {
            type: "text",
            text: replyMessage,
          },
        ],
        actions: [
          {
            action: "set_field_value",
            field_name: "User_Text",
            value: JSON.stringify(userArray),
          },
          {
            action: "set_field_value",
            field_name: "Bot_Text",
            value: JSON.stringify(botArray),
          },
        ],
        quick_replies: [], // Adjust as needed
      },
    };

    return {
      statusCode: 200,
      body: JSON.stringify(manyChatResponse),
      headers: { "Content-Type": "application/json" },
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ message: error.message }),
      headers: { "Content-Type": "application/json" },
    };
  }
};
```

Response Format for ManyChat (Telegram)

Your response to ManyChat for Telegram should look something like this:

```json
{
  "version": "v2",
  "content": {
    "type": "telegram",
    "messages": [
      {
        "type": "text",
        "text": "I'm still doing great, and I'm ready to help you find the perfect toy for your little one. Let me know if you need any recommendations or have any questions. We have a lot of fun toys to explore together!"
      }
    ],
    "actions": [
      {
        "action": "set_field_value",
        "field_name": "User_Text",
        "value": "[\"How is your day?\",\"How is your day?\"]"
      },
      {
        "action": "set_field_value",
        "field_name": "Bot_Text",
        "value": "[\"Hello there! I'm doing great, as I'm always excited to talk about toys. How about you? Are you looking for any specific toys or just browsing? We have a wide variety of options for you to choose from.\",\"I'm still doing great, and I'm ready to help you find the perfect toy for your little one. Let me know if you need any recommendations or have any questions. We have a lot of fun toys to explore together!\"]"
      }
    ],
    "quick_replies": []
  }
}
```
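If you prefer to assemble this response programmatically outside the full Lambda handler, the shape can be built with a small helper. This is a sketch; the field names follow the example above, and the inputs are illustrative:

```javascript
// Assemble the ManyChat v2 dynamic-block response shown above.
function buildManyChatResponse(replyMessage, userArray, botArray) {
  return {
    version: "v2",
    content: {
      type: "telegram", // adjust for your platform
      messages: [{ type: "text", text: replyMessage }],
      actions: [
        {
          action: "set_field_value",
          field_name: "User_Text",
          value: JSON.stringify(userArray),
        },
        {
          action: "set_field_value",
          field_name: "Bot_Text",
          value: JSON.stringify(botArray),
        },
      ],
      quick_replies: [],
    },
  };
}

const resp = buildManyChatResponse("Hi!", ["Hello?"], ["Hi!"]);
console.log(resp.content.actions[0].value); // ["Hello?"]
```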
  1. Configure the rest of the flow

    • The customer should be able to continue the conversation. This should just route back to the dynamic request without clearing the user fields.
    • The customer should also be able to ask a new question, which should clear the user fields. This action clears the context of the conversation.
    • Finally, it is best to provide a way for a person to reach a real human agent and close the ticket if they so desire.

After completing this your flow should look like this:

Final context flow

If you followed this example your Telegram bot should be able to respond with the context of the entire conversation:

Final context flow

Happy chatting!