Day 25: Building an MVP – Connecting the dots

On Day 23, CofounderGPT helped me to connect the setup flow. However, even with its help, I faced many minor issues and went down the rabbit hole to solve them. Today, I am trying to connect the essential flow of the application – tooltip creation.
Routing, fetching data, and handling errors
There are many ways to handle routing and data fetching in React. People can spend hours arguing about the best way to do it and have strong opinions on what you should not do. This (unnecessary) introduction is here because I recently attended a React conference. In short, if you are here to learn the best way to write React, you are in the wrong place, because my only goal at the moment is to finish the MVP as fast as possible while producing manageable code. We’ll refactor it later if anyone starts using the Knowlo application.
However, working on a new project is always an excellent opportunity to test new things and try the approaches you wouldn’t push directly to production in your main project. I wish I had more time to test a few things I have never used before. But hopefully, people will start using Knowlo, so I’ll need to refactor some things anyway. Can CofounderGPT help me with refactoring? Let’s hope we’ll find out soon.
An important piece of the Knowlo front-end application is the router. React Router 6 has many excellent features that can speed me up and handle some common tasks for me.
For example, I can have an application shell for logged-in users and render the app content inside it. Or, even better, I can use loaders to load the data when navigating to the page and display an error if the data is not present.
The data loaders are especially useful for rendering the setup screen or displaying a specific message if there are no tooltips, etc.
The core of my router looks like the following code snippet.
const router = createBrowserRouter([
  {
    id: 'Login',
    path: '/login',
    element: <Login />,
  },
  {
    id: 'Signup',
    path: '/signup',
    element: <Signup />,
  },
  {
    id: 'AppShell',
    path: '/',
    element: <AppShell />,
    errorElement: <PageNotFound />, // <-- 404 page
    loader: appShellDataLoader, // <-- load the user and their projects
    children: [
      {
        id: 'Dashboard',
        path: '/',
        loader: dashboardDataLoader,
        element: <DashboardPage />,
        errorElement: <NoTooltips />, // <-- render a specific CTA if the user doesn't have any tooltips
      },
      // More routes
    ],
  },
])
With this setup, I can easily load the data when loading the route. For example, we should only display the app shell if the user exists and has a project. So my data loader function can look like the following code snippet. I added some inline comments for easier understanding.
export const appShellDataLoader = async () => {
  // Load the currently logged-in user from Cognito.
  // This line throws an error if the user is not logged in
  const user = await Auth.currentAuthenticatedUser()
  const username = user.username
  if (!username) {
    throw new Error('NO_USERNAME')
  }
  // Get the user and the project data from GraphQL
  const response = await API.graphql<GraphQLQuery<IUserAndProjects>>(
    graphqlOperation(getMeAndMyProjects)
  )
  const projects = response.data?.getUser.projects
  // Throw an error if there are no projects
  if (!projects || projects.length === 0) {
    throw new Error('NO_PROJECTS')
  }
  // Or return the data in a specific format if everything is there
  return {
    user: {
      username: response.data?.getUser.username,
      email: response.data?.getUser.email,
    },
    projects: response.data?.getUser.projects || [],
  }
}
The only problem with this setup is that my router shows a 404 error if a user does not have any projects or is not logged in. To fix that, I need to render a custom error-handling component instead of the “PageNotFound” component.
In the “ErrorRoute” component, I can use the useRouteError hook to get the error that occurred and display the correct flow depending on that error.
For example, if the user is not logged in, I’ll redirect them to the login page. If my GraphQL query does not return a user, or if the user has no projects, I’ll show the setup page. Finally, I’ll show the “PageNotFound” component if the selected route does not exist.
This error-handling component can look like the following code snippet.
const ErrorComponent: React.FC = () => {
  const [username, setUsername] = useState<string | null>(null)
  const navigate = useNavigate()
  // I should handle these errors better and always throw the same error type
  const error = (useRouteError() as Error | string | IGraphQLAPIErrorObject)
  useEffect(() => {
    Auth.currentAuthenticatedUser().then(user => {
      setUsername(user.username as string)
    })
  }, [])
  // Navigation is a side effect, so it belongs in an effect, not in the render phase
  useEffect(() => {
    if (error === 'The user is not authenticated') {
      navigate('/login')
    }
  }, [error, navigate])
  return (<>
    {
      // Yes, I know this is too complex and should be refactored
      (
        (error instanceof Error && error.message === 'NO_PROJECTS') ||
        ((error as IGraphQLAPIErrorObject).errors?.[0]?.message === 'User not found')
      ) ? (
        username ? (
          <SetupPage username={username} />
        ) : null
      ) : (
        <NotFound />
      )
    }
  </>)
}
This component can look much more elegant and readable, but I’ll deal with it in the future.
Now that the data is fetched, I can read it in the component with the following code:
const { user, projects } = useLoaderData()
The useLoaderData function’s return value has an unknown type, which makes the code a bit less readable because I need to declare the type somehow. I guess there’s a more elegant way to do that, but a simple “as SomeType” cast also works.
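To illustrate that cast pattern, here is a minimal, self-contained sketch. Note that the stub below only stands in for react-router’s useLoaderData (which returns unknown), and the IAppShellData type and its fields are my assumptions for the example, not the real Knowlo types:

```typescript
// Stub standing in for react-router's useLoaderData, which returns `unknown`
const useLoaderData = (): unknown => ({
  user: { username: 'demo', email: 'demo@example.com' },
  projects: [{ id: 'p1' }],
})

// Hypothetical shape of the data returned by appShellDataLoader
interface IAppShellData {
  user: { username: string; email: string }
  projects: { id: string }[]
}

// The `as` cast gives the loader data a concrete type for the rest of the component
const { user, projects } = useLoaderData() as IAppShellData
```

The cast is unchecked, so it only moves the problem to a single line; a runtime validation library would be safer, but the cast is good enough for an MVP.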
These loaders are very helpful for the pages that depend on tooltips. For example, there’s no point in rendering the Dashboard page if users do not have any tooltips. Instead, we can show a CTA that tells them to create their first tooltip. That page can look like the following screenshot:

Not great, not terrible. Let’s create a first tooltip!
Creating tooltips
The critical part of this page is the tooltip creation flow and its connection to the backend. However, let’s finish the UI first.
The tooltip creation should be a two-step process:
- In the first step, users should name their tooltip, explain the tooltip’s purpose, and optionally add a description or some tags (we’ll add this later).
- In the second step, Knowlo should use OpenAI to generate the tooltip content based on the uploaded knowledge base and the tooltip purpose explanation. In this step, users should be able to modify the tooltip and save it.
The simple “CreateTooltip” component should look similar to the following code snippet:
const CreateTooltip: React.FC<{}> = () => {
  const [step, setStep] = useState(1)
  const [step2Data, setStep2Data] = useState<INextStepParams | null>(null)
  const goToStep = (step: number): void => {
    if ([1, 2].includes(step)) {
      setStep(step)
    }
  }
  const nextStep = (params: INextStepParams): void => {
    setStep2Data(params)
    // Probably redundant
    goToStep(2)
  }
  return (
    <div>
      { step === 1 && (
        <CreateTooltipStep1 nextStep={nextStep} />
      )}
      { step === 2 && step2Data && (
        <CreateTooltipStep2 createTooltip={console.log} {...step2Data} />
      )}
    </div>
  )
}
I should add proper error handling to this page, but let’s first connect the flow.
Pasting the whole step 1 and step 2 code would make this article unreadable. Step 1 is just a standard form; the important piece is the onSubmit function. Here it is:
const onSubmit = handleSubmit(async (data) => {
  setLoading(true)
  const result = await API.graphql<GraphQLQuery<IGetTooltipSuggestionResponse>>(graphqlOperation(getTooltipSuggestion, {
    projectId,
    description: data.tooltipExplanation,
  }))
  if (result.data) {
    nextStep({
      tooltipName: data.tooltipName,
      description: data.tooltipExplanation,
      answer: result.data.getTooltipSuggestion.answer,
      articleTitle: result.data.getTooltipSuggestion.articleTitle,
      articleUrl: result.data.getTooltipSuggestion.articleUrl,
    })
    // No need to turn off the loading because we are going to the next step
  }
  // We should add error handling here
  setLoading(false)
})
So, what’s the getTooltipSuggestion mutation?
When I started connecting this flow, I realized I did not have a way to get a tooltip suggestion. I could load it through a separate API, but I can also fetch that data through GraphQL. While writing these lines, I realized this should be a query, not a mutation, because it does not store any data. Thanks for the code review 🙂 I’ll change that.
I updated my GraphQL schema with a new mutation (that should be a query) and added a new GraphQL type. Here are the important changes in the GraphQL schema:
type Mutation {
  # other mutations
  getTooltipSuggestion(projectId: ID!, description: String!): TooltipSuggestion
}

type TooltipSuggestion {
  answer: String!
  articleTitle: String!
  articleUrl: String!
}
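For reference, moving this field to a query (as it should be) would only change where it lives in the schema; this is a sketch of what I expect the corrected version to look like:

```graphql
type Query {
  # other queries
  getTooltipSuggestion(projectId: ID!, description: String!): TooltipSuggestion
}
```

The resolver registration would then use typeName 'Query' instead of 'Mutation'; the arguments and return type stay the same.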
The next step is a resolver. Can I use a JavaScript resolver here? Let’s analyze the flow first.
Getting a tooltip suggestion
The easiest way to understand the data flow is to draw a diagram. I love diagrams, so here’s an Excalidraw diagram of the tooltip suggestion flow:

In short:
- We need to fetch the project from the DynamoDB table to get its embeddings S3 location.
- Then we need to get the embeddings from an S3 bucket.
- Then we need to create embeddings from a tooltip explanation.
- After that, we need to find the closest match to these embeddings in the Helpdesk embeddings.
- Finally, we need to ask ChatGPT API to create a tooltip and validate its response before answering the query.
While we could create a pipeline resolver for this, it’s probably too complex. Instead, we can use a Lambda resolver.
With a Lambda resolver, AppSync invokes a Lambda function that gathers the data and prepares the response. Let’s create it!
First, we must add a Lambda function and a Lambda resolver to the CDK stack.
A function can look like the following code snippet:
const getTooltipSuggestionFunction = new lambda.NodejsFunction(this, 'GetTooltipSuggestionFunction', {
  entry: './lib/functions/get-tooltip-suggestion/lambda.ts',
  handler: 'handler',
  runtime: Runtime.NODEJS_18_X,
  timeout: Duration.minutes(5),
  environment: {
    LOG_LEVEL: logLevelParameter.valueAsString,
    NODE_OPTIONS: '--enable-source-maps',
    OPEN_AI_API_KEY: openAiApiKey.stringValue,
    TABLE_NAME: coreDbTable.tableName,
    BUCKET_NAME: csvBucket.bucketName,
  },
  logRetention: environmentParameter.valueAsString === 'production' ? RetentionDays.INFINITE : RetentionDays.ONE_WEEK,
  bundling: {
    sourceMap: true,
  },
})
It needs permissions to read the data from the database and to get the embeddings file. We’ll grant these permissions with the following code:
coreDbTable.grantReadData(getTooltipSuggestionFunction)
csvBucket.grantRead(getTooltipSuggestionFunction, 'embeddings/*')
Finally, we need to add a resolver and a data source. Here’s the code:
const tooltipSuggestionDataSource = new appsync.LambdaDataSource(this, 'TooltipSuggestionDataSource', {
  api: graphQLApi,
  lambdaFunction: getTooltipSuggestionFunction,
})

tooltipSuggestionDataSource.createResolver('GetTooltipSuggestionResolver', {
  typeName: 'Mutation',
  fieldName: 'getTooltipSuggestion',
})
Before deploying a stack, we need to write a Lambda function. The function code follows the same old format:
lib/functions/get-tooltip-suggestion/
├── lambda.ts
├── src
│   ├── cosine-similarity.ts
│   ├── main.ts
│   └── parser.ts
└── types.ts
We keep the lambda.ts file as simple as possible; it just loads dependencies. The parser function parses the event data, and the main.ts file handles the business logic. The cosine-similarity.ts file contains a function for calculating the cosine similarity of the embeddings. It’s in a separate file because we might want to add tests to it later.
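To give a feel for the parser, here is a minimal sketch. It assumes the AppSync Lambda resolver event carries the GraphQL arguments under event.arguments; the type names and the MISSING_ARGUMENTS error are my inventions, not the real Knowlo code:

```typescript
// Hypothetical shape of the AppSync Lambda resolver event for this field
interface IAppSyncEvent {
  arguments: {
    projectId: string
    description: string
  }
}

export interface IGetTooltipSuggestionInput {
  projectId: string
  description: string
}

// Pull the GraphQL arguments out of the event and fail fast if they are missing
export function parser(event: IAppSyncEvent): IGetTooltipSuggestionInput {
  const { projectId, description } = event.arguments
  if (!projectId || !description) {
    throw new Error('MISSING_ARGUMENTS')
  }
  return { projectId, description }
}
```

Keeping the parsing in its own file means the business logic in main.ts never touches the raw event, which makes it trivial to unit test.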
Here’s the business logic code:
import { cosineSimilarity } from './cosine-similarity'
import { IGetTooltipSuggestionParams, IGetTooltipSuggestionResponse } from '../types'

export async function getTooltipSuggestion<T>({
  event, parser, repositories, logger
}: IGetTooltipSuggestionParams<T>): Promise<IGetTooltipSuggestionResponse> {
  const { projectId, description } = parser(event)
  logger.debug('projectId', projectId)
  logger.debug('description', description)

  const { knowledgeBaseS3Path, knowledgeBaseUrl } = await repositories.dbRepository.getProject(projectId)
  logger.debug('knowledgeBaseS3Path', knowledgeBaseS3Path)
  logger.debug('knowledgeBaseUrl', knowledgeBaseUrl)

  const helpdeskEmbeddingsRaw = await repositories.fileRepository.getFile(knowledgeBaseS3Path)
  logger.debug('helpdeskEmbeddingsRaw', `${helpdeskEmbeddingsRaw.length}`)
  const helpdeskEmbeddings = JSON.parse(helpdeskEmbeddingsRaw)

  const questionEmbeddings = await repositories.aiRepository.createEmbeddings(description)
  logger.debug('questionEmbeddings', questionEmbeddings)

  const similarities = helpdeskEmbeddings.map((item: any, idx: number) => ({
    idx: idx,
    similarity: cosineSimilarity(questionEmbeddings.data[0].embedding, item.embedding.data[0].embedding),
  }))
  logger.debug('similarities', similarities)

  const mostSimilarArticleIndex = similarities.reduce((maxIdx: number, curr: any, idx: number) => {
    return curr.similarity > similarities[maxIdx].similarity ? idx : maxIdx
  }, 0)
  const selectedArticle = helpdeskEmbeddings[mostSimilarArticleIndex]
  logger.debug('selectedArticle', selectedArticle)

  const generatedTooltip = await repositories.aiRepository.generateTooltip({
    id: selectedArticle.id,
    title: selectedArticle.title,
    slug: selectedArticle.slug,
    content: selectedArticle.content,
  }, description)
  logger.debug('generatedTooltip', JSON.stringify(generatedTooltip, null, 2))

  return {
    answer: generatedTooltip.answer,
    articleUrl: `${knowledgeBaseUrl}/${selectedArticle.slug}`,
    articleTitle: selectedArticle.title,
  }
}
I need to add a better validation of the response and 2-3 retries if a prompt fails. This might be an issue because of the OpenAI API request duration, so it might be better to handle the retries from the front end later.
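For the retries, a small generic helper would probably be enough. This is a sketch under my own naming (withRetries is not part of the current codebase):

```typescript
// Retry an async operation up to `attempts` times, rethrowing the last error
// if every attempt fails. A sketch, not the Knowlo implementation.
export async function withRetries<T>(
  operation: () => Promise<T>,
  attempts = 3,
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation()
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}
```

Wrapping the generateTooltip call in something like withRetries(() => repositories.aiRepository.generateTooltip(...)) would add resilience, but every retry adds another slow OpenAI round trip, which is exactly the duration concern above.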
The cosine similarity function code looks like the following code snippet:
// Calculate cosine similarity between two vectors
export function cosineSimilarity(vec1: number[], vec2: number[]): number {
  const dotProduct = vec1.reduce((sum, a, i) => sum + a * vec2[i], 0);
  const magnitude = Math.sqrt(vec1.reduce((sum, val) => sum + val * val, 0)) * Math.sqrt(vec2.reduce((sum, val) => sum + val * val, 0));
  return dotProduct / magnitude;
}
It’s the same code from the prototype version with TypeScript types (added by CofounderGPT, of course).
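As a quick sanity check (the function is repeated here so the snippet is self-contained): identical vectors should score approximately 1, and orthogonal vectors exactly 0.

```typescript
// Same cosine similarity function as above
function cosineSimilarity(vec1: number[], vec2: number[]): number {
  const dotProduct = vec1.reduce((sum, a, i) => sum + a * vec2[i], 0);
  const magnitude = Math.sqrt(vec1.reduce((sum, val) => sum + val * val, 0)) * Math.sqrt(vec2.reduce((sum, val) => sum + val * val, 0));
  return dotProduct / magnitude;
}

// A vector compared with itself is maximally similar (≈ 1, up to floating-point error)
console.log(cosineSimilarity([1, 2, 3], [1, 2, 3]))
// Orthogonal vectors share no direction at all
console.log(cosineSimilarity([1, 0], [0, 1])) // 0
```

These would be the first two unit tests once cosine-similarity.ts gets its test file.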
Finally, the OpenAI repository was extended with the following function:
async generateTooltip(context: IGenerateTooltipContext, question: string, model = 'gpt-3.5-turbo-16k') {
  try {
    const rawResponse = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model,
        messages: [
          {
            role: 'system',
            content: `You are Knowlo, an assistant that generates the tooltip content based on the provided Helpdesk article and users' tooltip description. The goal of these tooltips is to be presented to the end users of the application, and they should give them the best explanations just in time for the selected feature so they can use the system effortlessly.
I'll pass the Helpdesk article(s) and a question. Create a tooltip content to answer the question based on the provided Helpdesk article only.
Please always reply with JSON in the following format:
{ "answer": "Your answer here" }
Where the "answer" key value is a short answer, up to 600 characters. The answer should be formed as a statement, not as a direct answer to the question. For example, if the question is "How to create a new project?", the answer should be "To create a new project, click on the 'New project' button in the top right corner of the screen."
In case you are not 100% sure that you can explain the feature user described (for example, if the explanation was not clear enough or if the Helpdesk article does not contain an answer to the described functionality), answer with JSON with the following format:
{ "error": true, "message": "Explain why the error occurred" }
Where the message contains a clear explanation of why the error occurred.`
          },
          {
            role: 'user',
            content: `Return a JSON for the following Helpdesk article:
----
${JSON.stringify(context)}
----
To answer the following question:
----
${question}
----`
          }
        ],
        temperature: 0.5,
        n: 1,
      })
    })
    const response = await rawResponse.json()
    this.logger.debug('OpenAI usage', JSON.stringify(response, null, 2))
    const answer = response.choices[0].message.content.trim()
    return JSON.parse(answer)
  } catch (err) {
    this.logger.error('OpenAiRepository.generateTooltip > Error', err as Error)
    throw err
  }
}
As you can see, the prompt is quite big, but it needs to stay big to get a decent result. I tried multiple models for this, and I got the best results with the “gpt-3.5-turbo-16k” model. The “gpt-4” model was too slow, and we have 30 seconds to respond to the GraphQL call (a hard limit set by API Gateway). And the “gpt-3.5” model returned garbage. I’ll experiment a bit with models after the MVP is finalized.
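Because the prompt allows two JSON shapes, the response validation I still owe could start as a small helper like this sketch (parseTooltipResponse and the ITooltipAnswer type are my assumptions, not the current Knowlo code):

```typescript
// Hypothetical validated shape of a successful tooltip response
interface ITooltipAnswer {
  answer: string
}

// Validate the raw model output: it should be JSON containing either
// { "answer": "..." } or { "error": true, "message": "..." }
export function parseTooltipResponse(raw: string): ITooltipAnswer {
  let parsed: unknown
  try {
    parsed = JSON.parse(raw)
  } catch {
    throw new Error('INVALID_JSON')
  }
  const obj = parsed as { answer?: string; error?: boolean; message?: string }
  if (obj.error) {
    // The model explicitly refused; surface its explanation
    throw new Error(obj.message || 'MODEL_RETURNED_ERROR')
  }
  if (typeof obj.answer !== 'string' || obj.answer.length === 0) {
    throw new Error('MISSING_ANSWER')
  }
  return { answer: obj.answer }
}
```

Replacing the bare JSON.parse(answer) in generateTooltip with a call like this would turn malformed model output into a typed error instead of a crash further down the flow.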
With this resolver, I added a final piece of the puzzle. After deploying the CDK stack, I created and connected step 2 of the tooltip creation process. Here’s the result:
Step 1:

Step 2:

It needs a bit more work, but it’s good enough for now.
Scoreboard
Time spent today: 8h
Total time spent: 174h
Investment today: $0 USD
Total investment: $1,230.01 USD
Beta list subscribers: 79
Paying customers: 0
Revenue: $0
What’s next?
Now that we can create tooltips, I need CofounderGPT’s help to generate JavaScript code that users can embed in their apps. After that, we need to connect the analytics and tighten up the app a bit, and we’ll be ready for a few initial users!