Leveraging AI for UI and Frontend Software Generation: Evaluating LLM Prompt Engineering Libraries Flow-Prompt and LangChain

Nikita Bogomolski
12 min read · Apr 12, 2024


Designed by Freepik

The state of the market

Prompt engineering is the practice of designing and refining prompts to improve the performance of large language models (LLMs) on natural language processing tasks. The goal is to increase the accuracy, relevance, and coherence of the model's output by crafting prompts that are closely aligned with the task at hand.

One machine learning use case that is rapidly becoming widely adopted is code generation. Since code is essentially a machine-interpretable implementation of algorithms (sequences of steps aimed at achieving a distinct goal), there are few general-purpose approaches to solving tasks of above-medium complexity. As a consequence, businesses turn to software companies to design and develop software tailored to their specific needs. But what if there were a different way, an AI-driven approach to software design and implementation?

When a person develops software, they perform exactly the kinds of tasks that even general-purpose LLMs can do (and are actually great at): reading documentation, scouring the internet for answers, tirelessly trying to fix their code and, of course, implementing algorithms. Over time, their knowledge of the subject area compounds and they handle some tasks more and more efficiently.

Notice the resemblance?

Yet there is one core difference between generative AI and human problem solving:
When a human being solves a problem, they break it down into smaller ones until the objectives are almost trivial. I would go as far as to call this "the pillar of software engineering". Once the bigger picture is composed of small, simple tasks, it becomes easier for a person to tackle each one with analytical thinking or experience. Then and there, the human brain can chart a specific route to the desired outcome.
When it comes to LLMs, we can't necessarily know for sure what path to a resolution they might take. Therefore, the only way for us to achieve a precise goal is to steer the model in the right direction and build upon its responses. This both improves context clarity and moves us closer to specificity.

All things considered, I think it's fair to say software engineering is shifting closer to automation. With the rise of publicly accessible and open-source language models, many versatile, feature-rich tools have emerged that can be handy to the average programmer. Today I would like to take this opportunity to compare two open-source Python libraries for LLM integration: Flow Prompt and LangChain.

These contestants will be tested on a task I came up with specifically for this comparison: UI & frontend software generation. The peculiarities of the chosen assignment include:

  • UI generation with prompt engineering;
  • State management logic;
  • Component decomposition;
  • Constructability of generated architecture;
  • Library ease of use.

I had no prior experience with prompt engineering, so this article should be most valuable to newcomers :).

The application chosen for the competition is a chat app.

Both libraries were tested with a GPT-3.5 model for a fair comparison. With that, there's a common flow to both implementations:

1. I defined helper classes TreeNode and FileManager for the data structure and result output respectively (both are sketched just after this list).
2. A FrontendAgent class was defined as well for case-specific operations.
3. The main function implements the basic logic of frontend generation with prompt engineering. The algorithm is as follows:
  1. Generate the project folder structure as an indented tree.
  2. Load the output into an actual tree data structure.
  3. Grab the components subtree and search for its leaves.
  4. Generate components based on their names and store them in an appropriate folder.
  5. Do the same for the pages subtree, but use a specific prompt that emphasizes the usage of existing components.
  6. Store the pages relative to the components.
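
The tree helper module isn't the focus of this article, so I won't list it in full. Here is a minimal sketch of the idea behind TreeNode and build_tree_from_indented_text; the names match the imports used below, but the bodies are a simplified reconstruction rather than the exact code (the real module needs more edge-case handling):

# A minimal sketch of the tree helpers (module `tree`). The names match the
# imports used below; the bodies are a simplified reconstruction.
class TreeNode:
    def __init__(self, name: str):
        self.name = name
        self.children: list['TreeNode'] = []

    def find_node_by_name(self, target_name: str) -> 'TreeNode | None':
        # Depth-first search for the first node with a matching name.
        if self.name == target_name:
            return self
        for child in self.children:
            found = child.find_node_by_name(target_name)
            if found is not None:
                return found
        return None

    def pre_order_traverse(self, depth: int = 0) -> None:
        print('  ' * depth + self.name)
        for child in self.children:
            child.pre_order_traverse(depth + 1)


def build_tree_from_indented_text(text: str, indent_width: int = 2) -> TreeNode:
    # Turn an indented directory listing (two spaces per level) into a tree.
    root = None
    stack = []  # (depth, node) pairs along the current branch
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip(' '))) // indent_width
        node = TreeNode(line.strip())
        if root is None:
            root, stack = node, [(depth, node)]
            continue
        # Climb back up to this node's parent, then attach.
        while len(stack) > 1 and stack[-1][0] >= depth:
            stack.pop()
        stack[-1][1].children.append(node)
        stack.append((depth, node))
    return root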

Clearly, a few worthy additions come to mind right off the bat. I decided to stick to the functional side of things for the sake of keeping the algorithm simple.
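
The file_manager module is similarly small. A possible implementation of the two functions the agents rely on (again, a sketch rather than the exact code):

# A possible implementation of the `file_manager` helper module.
# The function names match the calls below; the bodies are a sketch.
import os
import shutil


def create_overwrite_directory(base_path: str, dir_name: str) -> str:
    # Create the directory, wiping any previous run's output first.
    dir_path = os.path.join(base_path, dir_name)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path)
    os.makedirs(dir_path)
    return dir_path


def create_file(dir_path: str, file_name: str, content: str) -> None:
    # Store one generated source file inside the target directory.
    with open(os.path.join(dir_path, file_name), 'w', encoding='utf-8') as f:
        f.write(content)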

Flow Prompt

Source: Flow Prompt

Flow Prompt is a concise open-source library made for efficient prompt management in production applications. It works with model providers such as OpenAI and Azure OpenAI. There is not much information about this library out there, likely because its functionality seems scarce compared to market leaders and competitors. But as far as I'm concerned, more lightweight does not always mean worse.

As a prerequisite, the library has to be installed with pip:

pip install flow-prompt

During development I relied on three of the library's features:

  • The FlowPrompt class itself. It represents the LLM instance based on the keys passed to it.
  • The PipePrompt utility. As the name suggests, a variation of a prompt template.
  • AIModelsBehaviour. Defines the model's calling behavior.

With these three simple tools and enough context clarity, specific tasks can be accomplished.
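
Condensed from the full listing below, the minimal call pattern these three pieces form looks like this (the greeting prompt is a toy example of mine, not part of the project):

from flow_prompt import FlowPrompt, PipePrompt, OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR
import json
import os

# The LLM instance, created from provider keys (Azure OpenAI here).
fp = FlowPrompt(azure_keys=json.loads(os.getenv('AZURE_KEYS', '{}')))

# A named, reusable prompt with context variables in curly braces.
prompt = PipePrompt('greet_user')
prompt.add(content='Say hello to {name}')

# The behaviour decides which model(s) to call; here, the prebuilt GPT-3.5 one.
response = fp.call(prompt.id, {'name': 'Nikita'}, OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR)
print(response.content)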

Here's what I managed to put together:

from flow_prompt import FlowPrompt, PipePrompt
# OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR is a prebuilt AIModelsBehaviour instance.
from flow_prompt import OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR
import os
import dotenv
import json

from tree import TreeNode, build_tree_from_indented_text
import file_manager


class FrontendAgent:
    def __init__(self) -> None:
        azure_keys = json.loads(os.getenv("AZURE_KEYS", "{}"))
        self.__llm_instance = FlowPrompt(azure_keys=azure_keys)
        # Prompt for the overall project layout.
        self.__project_prompt = PipePrompt('generate_react_project_structure')
        self.__project_prompt.add(content="Create the project folder structure for a {app_name} web application "
                                          "in React. Use double spaces for indentation and no other symbols. "
                                          "Make sure there's only one root directory src which includes components and pages directories.")
        # Prompt for a single component.
        self.__component_prompt = PipePrompt('generate_react_component')
        self.__component_prompt.add(content='write a react component for a {component_name}')
        # Prompt for a page that aggregates already generated components.
        self.__page_prompt = PipePrompt('generate_react_page_aggregate_component')
        self.__page_prompt.add('write a react component for a {page_name} page. You may use {component_list} components '
                               'from {comp_dir_path} if any of these belong on the page')
        self.behavior = OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR

    def generate_project_structure(self, proj_name: str) -> str:
        proj_context = {'app_name': proj_name}
        structure_response = self.__llm_instance.call(self.__project_prompt.id, proj_context, self.behavior)
        return structure_response.content

    def generate_component(self, comp_name: str) -> str:
        comp_context = {'component_name': comp_name}
        comp_response = self.__llm_instance.call(self.__component_prompt.id, comp_context, self.behavior)
        return comp_response.content

    def generate_page(self, page_name: str, existing_components: list[str], path_to_components: str) -> str:
        ctx = {'page_name': page_name, 'component_list': existing_components, 'comp_dir_path': path_to_components}
        page_response = self.__llm_instance.call(self.__page_prompt.id, ctx, self.behavior)
        return page_response.content

    def generate_component_batch(self, dir_path: str, component_names: list[str]):
        for component_name in component_names:
            code = self.generate_component(component_name)
            file_manager.create_file(dir_path, f'{component_name}.js', code)

    def generate_page_batch(self, dir_path: str, page_names: list[str], children_components: list[str], path_to_components: str):
        for page_name in page_names:
            code = self.generate_page(page_name, children_components, path_to_components)
            file_manager.create_file(dir_path, f'{page_name}.js', code)


def extract_file_name_no_ext(full_name: str, extension: str) -> str:
    return full_name.split(extension)[0]


def extract_component_names(proj_tree: TreeNode) -> list[str]:
    component_names = []
    for component_file in proj_tree.children:
        # Check if it's a leaf
        if len(component_file.children) == 0:
            if component_file.name.endswith('js') or component_file.name.endswith('jsx'):
                component_name = extract_file_name_no_ext(component_file.name, '.js')
                component_names.append(component_name)
        else:
            component_names.extend(extract_component_names(proj_tree=component_file))
    return component_names


if __name__ == '__main__':
    dotenv.load_dotenv(dotenv.find_dotenv())
    agent = FrontendAgent()
    proj_structure = agent.generate_project_structure(proj_name='chat app')

    proj_tree = build_tree_from_indented_text(proj_structure)
    proj_tree.pre_order_traverse()

    components_dir_subtree = proj_tree.find_node_by_name('components')
    pages_dir_subtree = proj_tree.find_node_by_name('pages')
    if components_dir_subtree is None or pages_dir_subtree is None:
        raise Exception('components or pages dir is missing')

    component_names = extract_component_names(proj_tree=components_dir_subtree)
    page_names = extract_component_names(proj_tree=pages_dir_subtree)
    print(component_names)
    print(page_names)

    components_dir_path = file_manager.create_overwrite_directory('./temp', 'components')
    pages_dir_path = file_manager.create_overwrite_directory('./temp', 'pages')
    agent.generate_component_batch(components_dir_path, component_names)
    agent.generate_page_batch(pages_dir_path, page_names, component_names, '../components')

Let me walk you through each step of the thought process:

  • I began building the functionality with the core FrontendAgent class. The constructor sets up the base LLM (I used keys for an Azure OpenAI model) and defines multiple prompts, each serving a distinct purpose. I used the default GPT-3.5 behavior from the library:
class FrontendAgent:
    def __init__(self) -> None:
        azure_keys = json.loads(os.getenv("AZURE_KEYS", "{}"))
        self.__llm_instance = FlowPrompt(azure_keys=azure_keys)
        self.__project_prompt = PipePrompt('generate_react_project_structure')
        self.__project_prompt.add(content="Create the project folder structure for a {app_name} web application "
                                          "in React. Use double spaces for indentation and no other symbols. "
                                          "Make sure there's only one root directory src which includes components and pages directories.")
        self.__component_prompt = PipePrompt('generate_react_component')
        self.__component_prompt.add(content='write a react component for a {component_name}')
        self.__page_prompt = PipePrompt('generate_react_page_aggregate_component')
        self.__page_prompt.add('write a react component for a {page_name} page. You may use {component_list} components '
                               'from {comp_dir_path} if any of these belong on the page')
        self.behavior = OPENAI_GPT3_5_TURBO_0125_BEHAVIOUR
  • From there, I included some wrapper functions in order to provide a more understandable interface to the user:
    def generate_project_structure(self, proj_name: str) -> str:
        proj_context = {'app_name': proj_name}
        structure_response = self.__llm_instance.call(self.__project_prompt.id, proj_context, self.behavior)
        return structure_response.content

    def generate_component(self, comp_name: str) -> str:
        comp_context = {'component_name': comp_name}
        comp_response = self.__llm_instance.call(self.__component_prompt.id, comp_context, self.behavior)
        return comp_response.content

    def generate_page(self, page_name: str, existing_components: list[str], path_to_components: str) -> str:
        ctx = {'page_name': page_name, 'component_list': existing_components, 'comp_dir_path': path_to_components}
        page_response = self.__llm_instance.call(self.__page_prompt.id, ctx, self.behavior)
        return page_response.content
  • Since there's no built-in batch processing in flow-prompt, I implemented it myself as a simple O(n) loop. The minimalism of the library made it possible to iterate over each component separately without losing too much time.
  • The main() function illustrates the approach described above.

The results are in; what do we have?

The project tree looks simple yet solid, and responsibilities are well distributed. Wait until we get to the code.

src
  components
    ChatRoom.js
    ChatInput.js
    ChatMessage.js
  pages
    Home.js
    ChatPage.js
    Login.js

Let's take a look at the root component of this iteration, Home.js:

import React from "react";
import ChatRoom from "../components/ChatRoom";
import ChatInput from "../components/ChatInput";
import ChatMessage from "../components/ChatMessage";

const HomePage = () => {
  return (
    <div>
      <h1>Welcome to the Home Page</h1>
      <ChatRoom>
        <ChatMessage author="Alice" message="Hello, how are you?" />
        <ChatMessage author="Bob" message="I'm good, thanks for asking!" />
      </ChatRoom>
      <ChatInput />
    </div>
  );
};

export default HomePage;

It aggregates all three subcomponents as intended, and all markup and prop logic seems valid. But what about the state management side of things?

import { useState } from "react";
import { Send } from "react-lucide";
import "tailwindcss/tailwind.css";

const ChatInput = ({ onSendMessage }) => {
  const [message, setMessage] = useState("");

  const handleChange = (e) => {
    setMessage(e.target.value);
  };

  const handleSubmit = (e) => {
    e.preventDefault();
    if (message.trim() !== "") {
      onSendMessage(message);
      setMessage("");
    }
  };

  return (
    <form
      onSubmit={handleSubmit}
      className="flex items-center justify-between p-4 bg-gray-200"
    >
      <input
        type="text"
        value={message}
        onChange={handleChange}
        placeholder="Type a message..."
        className="flex-1 p-2 mr-2 bg-white rounded-lg focus:outline-none"
      />
      <button type="submit" className="p-2 bg-blue-500 text-white rounded-full">
        <Send size={24} />
      </button>
    </form>
  );
};

export default ChatInput;

This kind of precision honestly blew my mind. Even though the tailwind import looks suspect, it can probably be fixed via a config file. The rest of the code looks amazing: the LLM remembered to trim the message to validate it, included the hook for input state changes, and applied styling configuration. I would be pleased to have a tool that could do this.

LangChain

Source: LangChain

From David to Goliath. If the previous contender represents a small, easily accessible toolbox, LangChain is designed to be a comprehensive solution for working with language models, providing a wide range of features and capabilities. It includes abstractions for prompt templates, chains, agents, and output parsing, as well as a variety of integrations with model providers and data sources. LangChain also provides a flexible and extensible architecture that allows users to easily integrate their own models and data.

The library can be installed via pip as well; note that the OpenAI integration used below lives in a separate package:

pip install langchain langchain-openai

Some theoretical points:

  • Many of the components that make up the library implement the Runnable interface, which lets the necessary parts form a pipeline, much like Bash CLI pipelines (see the sketch after this list).
  • It has a chat-model implementation that provides a more human-centric, message-based approach.
  • The base interface includes various ways to pass data around, such as single-call, batch, and stream processing.
  • It has built-in output parsers and other tools.
  • It can run in parallel and/or asynchronously.
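
To make the pipe syntax concrete, here is a minimal, self-contained sketch of composing a prompt with a model via the Runnable interface (the prompt text is my own toy example, and an OPENAI_API_KEY is assumed to be set in the environment):

from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

# Runnables compose with `|`, much like a shell pipeline.
prompt = PromptTemplate.from_template('Write a React functional component for {component}.')
chain = prompt | OpenAI()

# One-off call...
single = chain.invoke({'component': 'SearchBar'})

# ...or the same chain over several inputs at once.
many = chain.batch([{'component': name} for name in ['SearchBar', 'ChatInput']])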

I've managed to assemble a reasonably well-rounded example:

import asyncio
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
import dotenv

from tree import TreeNode, build_tree_from_indented_text
import prompt_templates
import file_manager

dotenv.load_dotenv(dotenv.find_dotenv())


class FrontendAgent:
    def __init__(self):
        self.__llm_instance = OpenAI(max_tokens=3900)
        self.__component_prompt_template = PromptTemplate(
            input_variables=['component'],
            template='Write a React functional component for {component}. Please use tailwind and Lucide icons. Do not use descriptions')
        self.__project_prompt_template = PromptTemplate(
            input_variables=['project_name'],
            template=prompt_templates.project_structure_template)
        self.pages_prompt = PromptTemplate(
            input_variables=['page_name', 'project_name', 'components'],
            input_types={'page_name': str, 'project_name': str, 'components': list[str]},
            template="Create a {page_name} react page component for {project_name}. "
                     "You may use {components} if any of these belong on the page. "
                     "Don't explain anything, just write code. All components are located in ../components directory")
        # A Runnable chain: the prompt pipes straight into the LLM.
        self.pages_chain = self.pages_prompt | self.__llm_instance

    def generate_project_structure(self, project_name: str):
        prompt = self.__project_prompt_template.format(project_name=project_name)
        return self.__llm_instance.invoke(prompt)

    def invoke_component_code(self, name: str) -> str:
        component_prompt = self.__component_prompt_template.format(component=name)
        code = self.__llm_instance.invoke(component_prompt)
        return code

    def generate_component_code(self, prompts: list[str]) -> list[str]:
        component_prompts = [self.__component_prompt_template.format(component=prompt) for prompt in prompts]
        generated_code = self.__llm_instance.generate(component_prompts)
        return [generation[0].text for generation in generated_code.generations]


def extract_file_name_no_ext(full_name: str, extension: str) -> str:
    return full_name.split(extension)[0]


def generate_components(proj_tree: TreeNode) -> list[str]:
    component_names = []
    for component_file in proj_tree.children:
        # Check if it's a leaf
        if len(component_file.children) == 0:
            if component_file.name.endswith('js') or component_file.name.endswith('jsx'):
                component_name = extract_file_name_no_ext(component_file.name, '.js')
                component_names.append(component_name)
        else:
            component_names.extend(generate_components(proj_tree=component_file))
    return component_names


async def main():
    agent = FrontendAgent()
    project_name = 'chat app'
    proj_frame = agent.generate_project_structure(project_name)

    print('TREE')
    project_tree = build_tree_from_indented_text(proj_frame)
    project_tree.pre_order_traverse()

    components_dir = project_tree.find_node_by_name(target_name='components')
    if components_dir is None:
        raise Exception('components dir is missing')
    components_dir.pre_order_traverse()

    tree_components = generate_components(proj_tree=components_dir)

    # Generate components
    comp_dir_path = file_manager.create_overwrite_directory('./temp', 'components')

    # * batch generate example
    component_codes = agent.generate_component_code([f'a {component_name} for {project_name}' for component_name in tree_components])
    for component_code, component_name in zip(component_codes, tree_components):
        file_manager.create_file(comp_dir_path, f'{component_name}.js', component_code)

    # Aggregate into pages
    pages_dir_subtree = project_tree.find_node_by_name(target_name='pages')
    tree_pages = []  # empty default keeps the batch below a no-op if pages are missing
    if pages_dir_subtree is not None:
        tree_pages = generate_components(proj_tree=pages_dir_subtree)

    gen_pages = await agent.pages_chain.abatch([{'page_name': curr_page, 'project_name': project_name, 'components': tree_components} for curr_page in tree_pages])
    page_dir = file_manager.create_overwrite_directory('./temp', 'pages')
    for page_code, page_name in zip(gen_pages, tree_pages):
        file_manager.create_file(page_dir, f'{page_name}.js', page_code)


if __name__ == '__main__':
    asyncio.run(main())
  • Starting with the FrontendAgent itself, I've used basic PromptTemplates for each part. They are simple, can be typed, and take positional variables. I struggled a lot to create an AzureOpenAI instance (could be a skill issue), but never got it to work. I also built a chain for the pages to illustrate the might of the framework:
class FrontendAgent:
    def __init__(self):
        self.__llm_instance = OpenAI(max_tokens=3900)
        self.__component_prompt_template = PromptTemplate(
            input_variables=['component'],
            template='Write a React functional component for {component}. Please use tailwind and Lucide icons. Do not use descriptions')
        self.__project_prompt_template = PromptTemplate(
            input_variables=['project_name'],
            template=prompt_templates.project_structure_template)
        self.pages_prompt = PromptTemplate(
            input_variables=['page_name', 'project_name', 'components'],
            input_types={'page_name': str, 'project_name': str, 'components': list[str]},
            template="Create a {page_name} react page component for {project_name}. "
                     "You may use {components} if any of these belong on the page. "
                     "Don't explain anything, just write code. All components are located in ../components directory")
        self.pages_chain = self.pages_prompt | self.__llm_instance

    def generate_project_structure(self, project_name: str):
        prompt = self.__project_prompt_template.format(project_name=project_name)
        return self.__llm_instance.invoke(prompt)

    def invoke_component_code(self, name: str) -> str:
        component_prompt = self.__component_prompt_template.format(component=name)
        code = self.__llm_instance.invoke(component_prompt)
        return code

    def generate_component_code(self, prompts: list[str]) -> list[str]:
        component_prompts = [self.__component_prompt_template.format(component=prompt) for prompt in prompts]
        generated_code = self.__llm_instance.generate(component_prompts)
        return [generation[0].text for generation in generated_code.generations]
  • I added the same recursive function for leaf extraction.
  • I used multiple approaches to component creation in order to try them out. As you can see, significant context has to be supplied even though the task itself is not complex:
    # Generate components
    comp_dir_path = file_manager.create_overwrite_directory('./temp', 'components')

    # * batch generate example
    component_codes = agent.generate_component_code([f'a {component_name} for {project_name}' for component_name in tree_components])
    for component_code, component_name in zip(component_codes, tree_components):
        file_manager.create_file(comp_dir_path, f'{component_name}.js', component_code)

    # Aggregate into pages
    pages_dir_subtree = project_tree.find_node_by_name(target_name='pages')
    tree_pages = []  # empty default keeps the batch below a no-op if pages are missing
    if pages_dir_subtree is not None:
        tree_pages = generate_components(proj_tree=pages_dir_subtree)

    gen_pages = await agent.pages_chain.abatch([{'page_name': curr_page, 'project_name': project_name, 'components': tree_components} for curr_page in tree_pages])
    page_dir = file_manager.create_overwrite_directory('./temp', 'pages')
    for page_code, page_name in zip(gen_pages, tree_pages):
        file_manager.create_file(page_dir, f'{page_name}.js', page_code)

Even though the generate() method conveniently works in bulk, its output requires meticulous parsing afterwards. I didn't dig into the built-in parsers, since the scale and format of my prompts were not large enough to warrant it.
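
For larger projects, though, LangChain's built-in output parsers can be piped into a chain like any other Runnable. A minimal sketch (my own example, not code from this project):

from langchain_openai import OpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts import PromptTemplate

# StrOutputParser normalizes the model output to a plain string,
# so downstream code doesn't have to unwrap generation objects.
prompt = PromptTemplate.from_template('Write a React functional component for {component}.')
chain = prompt | OpenAI() | StrOutputParser()

code = chain.invoke({'component': 'ChatMessage'})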

The results:

src
  components
    Chat
      Chat.js
      ChatInput.js
      ChatMessage.js
      ChatRoom.js
    Sidebar
      Sidebar.js
      ContactList.js
      ConversationList.js
      SearchBar.js
  pages
    Home
      Home.js
      WelcomeMessage.js
      ChatList.js
      SearchBar.js
    Login
      Login.js
      LoginButton.js
      SignupButton.js
    Signup
      Signup.js
      SignupForm.js
      TermsAndConditions.js
  services
    AuthService.js
    ChatService.js
  utils
    authValidation.js
    formatMessage.js
    formatDate.js
  App.js
  index.js
  tailwind.css

The architecture produced with LangChain is considerably more sophisticated than flow-prompt's. Let's see what's in store. Traditionally, we'll take a look at the Home page:

import React from "react";
import Chat from "../components/Chat";
import ChatInput from "../components/ChatInput";
import ChatMessage from "../components/ChatMessage";
import ChatRoom from "../components/ChatRoom";
import Sidebar from "../components/Sidebar";
import ContactList from "../components/ContactList";
import ConversationList from "../components/ConversationList";
import SearchBar from "../components/SearchBar";

const Home = () => {
  return (
    <div className="home">
      <Sidebar />
      <ChatRoom />
      <Chat />
      <ChatInput />
      <ChatMessage />
      <ContactList />
      <ConversationList />
      <SearchBar />
    </div>
  );
};

export default Home;

As opposed to flow-prompt, this is a serious composite page. But can LangChain handle its branching? Sidebar.js:

import React from "react";
import { Menu, MenuLabel, MenuList, MenuLink } from "tailwind-react-ui";
import { FiMessageSquare } from "react-icons/fi";
import { FaUserFriends } from "react-icons/fa";
import { IoMdSettings } from "react-icons/io";

const Sidebar = () => {
  return (
    <Menu>
      <MenuLabel>Chat App</MenuLabel>
      <MenuList>
        <MenuLink>
          <FiMessageSquare size={18} />
        </MenuLink>
        <MenuLink>
          <FaUserFriends size={18} />
        </MenuLink>
        <MenuLink>
          <IoMdSettings size={18} />
        </MenuLink>
      </MenuList>
    </Menu>
  );
};

export default Sidebar;

This time around, the model took full advantage of the proposed libraries (tailwind and Lucide icons); it even found framework-specific distributions of them. Despite its sheer power, though, it never got past the useState hook either. I'm sure that if I had specified the decomposition more deeply, it would crack it like it's nothing. Therein lies the power of LangChain: its main codebase involves over 2,500 contributors, and it's no surprise it's so scalable.

Conclusion

Both Flow Prompt and LangChain are open-source libraries that can be used to generate a chat app UI through prompt engineering. Flow Prompt is the more lightweight and simple of the two, while LangChain is more comprehensive and provides a wider range of features and capabilities.

The choice between the two libraries will depend on the specific needs and goals of the user. If you are looking for a simple and efficient way to bring LLMs into your business, Flow Prompt may be the better choice due to its efficiency and ease of use. If you are looking for a more multipurpose solution with a wider range of features and capabilities, LangChain may be the better fit.

As for my project, I felt LangChain was too much for its current scale; it took me hours to explore the library's possibilities. At the same time, Flow Prompt lacked some worthwhile features, such as batch processing, comprehensive documentation, and a bit more abstraction.
