Day 23: Building an MVP – the Devil is in the Details

CofounderGPT helped me connect the login and signup flow on Day 21. The progress looked promising, and the goal for today was to connect the UI with GraphQL queries and mutations. However, even when the big picture seems clear and promising, many small things can block you or slow your progress more than planned. The devil is always in the details.
The Signup Flow
On Day 21, we connected the signup. However, being able to sign up and log in with an email and password is far from enough to start using Knowlo. After users sign up and log in, they need to create a project and upload a CSV file with their knowledge base articles. We can redirect them to the dashboard page and let them figure everything out by themselves, or we can help them set everything up.
Using many SaaS apps every day is already hard enough, so let's try to make Knowlo as easy to use as possible (while still moving fast).
The signup flow should look similar to the following diagram.

However, users can close their browser at any point in this flow. So once users log in, we need to check whether they have a project set up, and if they don't, redirect them to the Setup screen again.

On the Setup screen, users should create a project, upload a CSV file with the content of their knowledge base, and start their free trial. The current plan is to give MVP users a 6-month free trial, but we still need to do a proper expense analysis and prepare our initial pricing plan and the final MVP trial duration.
The first step is to add a route in our React app. We can do that with the following code snippet (in the frontend/src/App.tsx file):
import { createBrowserRouter } from 'react-router-dom'

const router = createBrowserRouter([
  {
    path: '/login',
    element: <Login />,
  },
  {
    path: '/signup',
    element: <Signup />,
  },
  {
    path: '/',
    element: <AppShell />,
    errorElement: <NotFound />,
    children: [
      {
        path: '/setup/:username',
        element: <SetupPage />,
      },
      // Other routes...
    ],
  },
])
Why do we need a username in the path params? We don’t, but we want to show a nice avatar for our users before they can customize it. We can generate an avatar using the DiceBear open-source service. These avatars are free for personal and commercial use, and they have many excellent styles. We’ll use the “Bottts Neutral” avatar style. The Bottts Neutral style is based on the Bottts style by Pablo Stanley. Here’s the avatar for my username (“slobodan”):
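DiceBear exposes a simple HTTP API, so showing the avatar is just a matter of building an image URL with the username as the seed. Here's a minimal sketch; the API version segment in the URL is my assumption, so check the DiceBear docs for the current one:
// Build a DiceBear "Bottts Neutral" avatar URL for a given username.
// Note: the "9.x" version segment is an assumption; adjust it to the current DiceBear API version.
function getAvatarUrl(username: string): string {
  return `https://api.dicebear.com/9.x/bottts-neutral/svg?seed=${encodeURIComponent(username)}`
}

// Usage: <img src={getAvatarUrl('slobodan')} alt="Avatar" />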

The next step is to redirect a user to the setup page if they are signed up but still don't have a project set up. We can do that with the following code snippet in the AppShell component (the frontend/src/containers/app-shell/index.tsx file):
try {
  const user = await Auth.currentAuthenticatedUser()
  username = user.username
  if (!username) {
    throw new Error('No username')
  }
  const response = await API.graphql(graphqlOperation(getUserAndProjects, { username }))
  setUsername(username)
} catch (err) {
  // Any GraphQL error at this point is treated as "no project set up yet"
  if (err.errors && err.errors.length > 0) {
    return navigate(`/setup/${username}`)
  }
  return navigate('/login')
}
The code above is not perfect, but it’s good enough at the moment. We’ll refactor it later. We’ll check if the user is logged in, and if they are, we’ll try to load their projects. In the catch statement, we’ll assume that every GraphQL error means the user does not have a project set up and redirect them to the Setup page. There’s a chance that we’ll receive some other GraphQL error at this point, but it’s more an edge case than a common scenario.
We use the AWS Amplify npm module to send the GraphQL request. The query we are sending is simple, but we extracted it to a separate file (frontend/src/services/graphql/custom-graphql-queries.ts) to make the code more readable. Here's the getUserAndProjects GraphQL query:
export const getUserAndProjects = `
  query getUserAndProjects($username: String!) {
    getUser {
      username
      email
      projects {
        project {
          id
          name
        }
      }
    }
  }
`
It's simple: we load the username and email address of the logged-in user, plus the IDs and names of their projects. If the query returns a result, the user has already set up a project. If not, we'll redirect them to the Setup page.
If I load the Knowlo app now, this GraphQL query will fail because we do not have resolvers for its fields yet. So our next step is to write and deploy those resolvers.
Writing GraphQL Resolvers
CofounderGPT already helped me generate an AppSync JavaScript resolver on Day 21. It’s time to test its ability to write resolvers. I went to the same thread and asked CofounderGPT to help me again.
Before asking CofounderGPT, I wrote a resolver for the User type. Here’s the code:
import { util, AppSyncIdentityCognito, Context, DynamoDBGetItemRequest } from '@aws-appsync/utils'

export function request(ctx: Context): DynamoDBGetItemRequest {
  const identity = ctx.identity as AppSyncIdentityCognito
  const username = identity.username
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({
      PK: `USER#${username}`,
      SK: `METADATA#${username}`,
    }),
    consistentRead: true,
  }
}

export function response(ctx: Context) {
  const identity = ctx.identity as AppSyncIdentityCognito
  const username = identity.username
  const { result } = ctx
  if (!result) {
    return util.error('User not found', JSON.stringify({ username }))
  }
  return result
}
We'll get the username from the Cognito identity so that users can see only their own profiles. Then we'll create a GetItem DynamoDB operation and fetch the user by username.
In the response section, we’ll throw an error if the user does not exist in the database.
Wait, aren't we using the user ID instead of the username in the GraphQL schema? Yes, that's correct. But once I started connecting the application with GraphQL queries, I realized that we don't really need the user ID field. The username is already a unique identifier, and it's always available in the identity for GraphQL queries and mutations that use the Cognito authorizer. Here's the updated User type in the GraphQL schema:
type User {
  username: String!
  firstName: String!
  lastName: String!
  email: String!
  image: String
  createdAt: String!
  projects: [ProjectUser]
}
I saved the GetUser query resolver in the backend/lib/resolvers/get-user.ts file and added the following code to the CDK stack:
new JavaScriptResolver(this, 'GetUserResolver', {
  api: graphQLApi,
  typeName: 'Query',
  fieldName: 'getUser',
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/get-user.ts',
})
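The JavaScriptResolver construct is the small CDK helper we put together on Day 21, so its code isn't shown here. For reference, here's a minimal sketch of what such a construct can look like, assuming it bundles the TypeScript resolver with esbuild and attaches it as an APPSYNC_JS resolver; the real implementation may differ:
import { Construct } from 'constructs'
import * as appsync from 'aws-cdk-lib/aws-appsync'
import { buildSync } from 'esbuild'

interface JavaScriptResolverProps {
  api: appsync.IGraphqlApi
  typeName: string
  fieldName: string
  dataSource: appsync.BaseDataSource
  entry: string
}

export class JavaScriptResolver extends Construct {
  constructor(scope: Construct, id: string, props: JavaScriptResolverProps) {
    super(scope, id)

    // Bundle the TypeScript resolver into a single ESM file for the APPSYNC_JS runtime
    const bundle = buildSync({
      entryPoints: [props.entry],
      bundle: true,
      write: false,
      format: 'esm',
      target: 'esnext',
      external: ['@aws-appsync/utils'],
    })

    new appsync.Resolver(this, 'Resolver', {
      api: props.api,
      typeName: props.typeName,
      fieldName: props.fieldName,
      dataSource: props.dataSource,
      runtime: appsync.FunctionRuntime.JS_1_0_0,
      code: appsync.Code.fromInline(bundle.outputFiles[0].text),
    })
  }
}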
Then I asked CofounderGPT to help me out with the other resolvers.
Slobodan: Can you generate a resolver for the "projects" field in the User type (the ProjectUser type)?
CofounderGPT: Sure, I can help with that. Based on your DynamoDB schema, we can use a Query to fetch all the projects related to a user. Here is the TypeScript code for the resolver:
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  // The User's ID is passed in as an argument to this resolver
  const { source: { id } } = ctx;
  return {
    operation: 'Query',
    query: {
      expression: 'GSI1PK = :userId',
      expressionValues: util.dynamodb.toMapValues({ ':userId': `PROJECTUSER#${id}` }),
    },
    index: 'GSI1',
  };
}

export function response(ctx) {
  // The Query operation returns a list of items
  const items = ctx.result.items;
  // Remove the 'GSI1PK' and 'GSI1SK' fields from each item and return the list
  return items.map(item => {
    const { GSI1PK, GSI1SK, ...rest } = item;
    return rest;
  });
}
This resolver fetches all the projects for a user by querying the GSI1 global secondary index with the PROJECTUSER#<user_id> key.
The response function then transforms the DynamoDB items into GraphQL items by removing the GSI1PK and GSI1SK fields, which are not part of the GraphQL schema.
Please note that this resolver assumes that your GraphQL schema has a User type with an id field, and a projects field that returns a list of ProjectUser types. If your schema is different, please let me know and I can adjust the resolver accordingly.
The result looked promising. CofounderGPT realized that we needed to remove the GSI1PK and GSI1SK fields from the response, which was an intelligent decision. I asked it to generate a few more resolvers without deploying the first one.
Slobodan: Thanks, now I need a resolver for the project field in the ProjectUser type.
CofounderGPT: Sure, based on your DynamoDB design, the project field in the ProjectUser type can be resolved by querying the PROJECT#<project_id> key in the main table. Here is the TypeScript code for the resolver:
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  // The Project's ID is part of the primary key in the ProjectUser type
  const { source: { projectId } } = ctx;
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ 'PK': `PROJECT#${projectId}`, 'SK': `METADATA#${projectId}` }),
  };
}

export function response(ctx) {
  // GetItem operation returns a single item
  const item = ctx.result;
  // Remove the 'PK' and 'SK' fields from the item and return the result
  const { PK, SK, ...rest } = item;
  return rest;
}
This resolver fetches the project related to a ProjectUser by getting the item with the PROJECT#<project_id> key from the main table.
The response function then transforms the DynamoDB item into a GraphQL item by removing the PK and SK fields, which are not part of the GraphQL schema.
Please note that this resolver assumes that your GraphQL schema has a ProjectUser type with a projectId field, and a project field that returns a Project type. If your schema is different, please let me know and I can adjust the resolver accordingly.
Slobodan: This is perfect, thanks. Can you just do the following modifications:
- Remove GSI1 and GSI2 PK and SK from the result
- Add types to the ctx arguments and everything else you can
CofounderGPT: Certainly, I can make these adjustments to the resolver. Here’s the TypeScript code for the resolver with the requested modifications:
import { util, AppSyncResolverContext } from '@aws-appsync/utils';
import { DynamoDBGetItemResponse, DynamoDBItem } from '@aws-appsync/dynamodb-utils';

interface ProjectUser {
  projectId: string;
}

interface Project {
  PK: string;
  SK: string;
  GSI1PK: string;
  GSI1SK: string;
  GSI2PK: string;
  GSI2SK: string;
}

export function request(ctx: AppSyncResolverContext<ProjectUser>): DynamoDBGetItemResponse {
  // The Project's ID is part of the primary key in the ProjectUser type
  const { source: { projectId } } = ctx;
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ 'PK': `PROJECT#${projectId}`, 'SK': `METADATA#${projectId}` }),
  };
}

export function response(ctx: AppSyncResolverContext<void, DynamoDBItem>): any {
  // GetItem operation returns a single item
  const item: Project = ctx.result;
  // Remove the 'PK', 'SK', 'GSI1PK', 'GSI1SK', 'GSI2PK', 'GSI2SK' fields from the item and return the result
  const { PK, SK, GSI1PK, GSI1SK, GSI2PK, GSI2SK, ...rest } = item;
  return rest;
}
In this code:
- I've defined ProjectUser and Project interfaces to provide typing for the context and DynamoDB item.
- The request function's return type is DynamoDBGetItemResponse, matching the structure expected for a DynamoDB GetItem operation.
- The response function's return type is any because it depends on the actual attributes of your Project items, excluding the keys and GSIs.
- I've added GSI1PK, GSI1SK, GSI2PK, GSI2SK to the destructuring in the response function, which removes these fields from the result.
Please note that the exact types may depend on your specific AWS AppSync and DynamoDB setup. If you’re using different type definitions or utilities, the types and imports in this code may need to be adjusted accordingly.
The last sentence was the first sign that CofounderGPT started forgetting the beginning of our conversation. But let’s test these resolvers first.
I saved the resolvers, but then I realized they were not 100% aligned with the GraphQL queries we want to run. I can't say that these issues were CofounderGPT's fault. I rushed a bit and asked the wrong questions, which led to some wrong answers.
I spent some time cleaning up the code and wrote the following resolvers:
- GetUserProjects (backend/lib/resolvers/get-user-projects.ts)
- GetProjectFromUserProjects (backend/lib/resolvers/get-project-from-project-user.ts) – I know! The name is very creative, as always.
Here's the code for the GetUserProjects resolver:
import { Context, util } from '@aws-appsync/utils';

export function request(ctx: Context) {
  // The user's ID comes from the parent User object (the resolver's source)
  const { source: { id } } = ctx;
  return {
    operation: 'Query',
    query: {
      expression: 'GSI1PK = :userId',
      expressionValues: util.dynamodb.toMapValues({ ':userId': `PROJECTUSER#${id}` }),
    },
    index: 'GSI1',
  };
}

export function response(ctx: Context) {
  // The Query operation returns a list of items
  const items = ctx.result.items;
  // Remove the primary key and GSI fields from each item and return the list
  return items.map((item: any) => {
    const { GSI1PK, GSI1SK, GSI2PK, GSI2SK, PK, SK, ...rest } = item;
    return rest;
  });
}
And the code for the GetProjectFromUserProjects resolver:
import { util, Context, DynamoDBGetItemRequest, DynamoDBStringResult } from '@aws-appsync/utils'

interface ProjectUser {
  projectId: string
  PK: string
}

interface Project {
  PK: DynamoDBStringResult
  SK: DynamoDBStringResult
  GSI1PK: DynamoDBStringResult
  GSI1SK: DynamoDBStringResult
  GSI2PK: DynamoDBStringResult
  GSI2SK: DynamoDBStringResult
}

export function request(ctx: Context<ProjectUser>): DynamoDBGetItemRequest {
  // The project's ID is either on the source item or encoded in its partition key
  const projectId = ctx.source.projectId || ctx.source.PK.replace('PROJECT#', '')
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ 'PK': `PROJECT#${projectId}`, 'SK': `METADATA#${projectId}` }),
  }
}

export function response(ctx: Context<void, Project>): any {
  // GetItem operation returns a single item
  const item: Project = ctx.result
  // Remove the 'PK', 'SK', 'GSI1PK', 'GSI1SK', 'GSI2PK', 'GSI2SK' fields from the item and return the result
  const { PK, SK, GSI1PK, GSI1SK, GSI2PK, GSI2SK, ...rest } = item
  return rest
}
As you can see, I still reused some of the generated code. So we didn’t waste too much time. Here’s the code that connects the resolvers to GraphQL fields in the CDK stack:
new JavaScriptResolver(this, 'GetUserProjectsResolver', {
  api: graphQLApi,
  typeName: 'User',
  fieldName: 'projects',
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/get-user-projects.ts',
})

new JavaScriptResolver(this, 'GetProjectFromProjectUserResolver', {
  api: graphQLApi,
  typeName: 'ProjectUser',
  fieldName: 'project',
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/get-project-from-project-user.ts',
})
After a redeployment, everything worked. But then, I needed a way to update a user and create a project to test the whole flow. Let’s ask CofounderGPT for help one more time!
Slobodan: I updated the GraphQL schema a bit. Here’s a new one:
[THE FULL GRAPHQL SCHEMA PASTED IN A PROMPT]
Can you generate a resolver for the UpdateUser mutation for me?
CofounderGPT: Based on the provided GraphQL schema, the updateUser mutation takes in a username and an input of type UserInput, and returns a User. The UserInput type includes firstName, lastName, email, and image. The User type includes username, firstName, lastName, email, image, createdAt, and projects.
In AppSync, resolvers are associated with a data source, which is the place where the actual data lives. Resolvers are composed of a request mapping template and a response mapping template, which define the workflow of translating the GraphQL request into the language of the data source and then translating the response from the data source back into GraphQL.
Assuming that you're using DynamoDB as your data source, the updateUser mutation resolver might look something like this:
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { username, input } = ctx.args;
  const { firstName, lastName, email, image } = input;

  // Define the attributes to update
  const updateExpressionParts = [];
  const expressionAttributeValues = {};

  if (firstName !== undefined) {
    updateExpressionParts.push("firstName = :firstName");
    expressionAttributeValues[":firstName"] = { "S": firstName };
  }
  if (lastName !== undefined) {
    updateExpressionParts.push("lastName = :lastName");
    expressionAttributeValues[":lastName"] = { "S": lastName };
  }
  if (email !== undefined) {
    updateExpressionParts.push("email = :email");
    expressionAttributeValues[":email"] = { "S": email };
  }
  if (image !== undefined) {
    updateExpressionParts.push("image = :image");
    expressionAttributeValues[":image"] = { "S": image };
  }

  const updateExpression = "SET " + updateExpressionParts.join(", ");

  return {
    operation: 'UpdateItem',
    key: util.dynamodb.toMapValues({ username }),
    updateExpression: updateExpression,
    expressionAttributeValues: expressionAttributeValues,
    returnValues: 'ALL_NEW'
  };
}

export function response(ctx) {
  return ctx.result;
}
This resolver will update a user's firstName, lastName, email, and image in DynamoDB, using the username as the key for the operation. If a field in the input is undefined, it will not be included in the update. The updated user data is then returned.
Please note that this resolver assumes that the username is the primary key for the User table in DynamoDB, and that there is only one item (user) with a given username. Additionally, this resolver does not handle error cases or unexpected responses from DynamoDB.
The above resolver does not account for the projects field in the User type. If this data is stored in a separate table or requires additional logic to fetch, you would need to use a pipeline resolver and add a function to fetch the project data after updating the user data [1].
Lastly, the resolver also doesn't handle the createdAt field in the User type. If this field should not be updated when the user is updated, you would need to add a condition expression to the UpdateItem operation to prevent this field from being updated, similar to the conditional update example in the AWS DynamoDB documentation [2].
Unfortunately, I was unable to find the exact documentation on how to use the UpdateItem operation in an AWS AppSync resolver. The AWS AppSync documentation for PutItem was available, but the ` [RESPONSE CUT OFF]
There were multiple issues with the code above. First, the GraphQL schema I pasted marked some required fields as optional, which made the code a bit more complicated. Second, CofounderGPT used the UpdateItem operation, which sounds like the best fit for the UpdateUser mutation, but this mutation also handles the signup (creating the user record), so we went with PutItem instead. And finally, the most important one: CofounderGPT forgot about the primary key schema of our DynamoDB table.
The final code I used looks like the following snippet:
import { util, Context } from '@aws-appsync/utils'

export function request(ctx: Context) {
  const { username, input } = ctx.args
  const { firstName, lastName, email, image } = input
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ PK: `USER#${username}`, SK: `METADATA#${username}` }),
    attributeValues: util.dynamodb.toMapValues({
      username,
      firstName,
      lastName,
      email,
      image,
      createdAt: util.time.nowISO8601()
    }),
  }
}

export function response(ctx: Context) {
  return ctx.result
}
I tried one more time.
Slobodan: Can you now create the CreateProject mutation?
CofounderGPT: Here's a draft of the CreateProject mutation using the PutItem operation in AWS DynamoDB via AWS AppSync. This mutation will create a new project with a projectId (which acts as the key), name, description, and createdAt fields. If a project with the given projectId already exists, it will be overwritten.
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { projectId, name, description, createdAt } = ctx.args;
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({ projectId }),
    attributeValues: util.dynamodb.toMapValues({
      name,
      description,
      createdAt,
    }),
  };
}

export function response(ctx) {
  return ctx.result;
}
In the request function, ctx.args contains the arguments passed to the mutation. These arguments are used to form the PutItem request that will create the new project in DynamoDB.
The response function simply returns the result of the PutItem operation, which in this case will be the new project that was created.
However, the PutItem operation in DynamoDB has a limitation: it does not return the entire newly created item. It only returns the values of the attributes that were explicitly defined in the PutItem request. This means that if you want to return the entire new project item as the result of the CreateProject mutation, you will have to perform a separate GetItem operation after the PutItem operation, to fetch the new project.
Note that, as a best practice, you should also add error handling code to manage any potential errors that might occur when interacting with DynamoDB. Additionally, validation code should be added to verify that the input arguments meet any required criteria, such as checking that the projectId is not null or empty.
Regarding the conditional update or increment functionality that you mentioned, the PutItem operation does not support these features. You would need to use the UpdateItem operation in DynamoDB, which has more complex syntax and functionality, such as conditionally updating an attribute or incrementing a numeric attribute [1]. Unfortunately, I ran out of time before I could locate an example of using UpdateItem with AWS AppSync.
I hope this helps! If you need additional information or have further questions, feel free to ask.
Again, the primary key schema was wrong. I tried providing the DynamoDB table structure again, but CofounderGPT had lost the context.
I hit the limit on how many changes I can make in a single thread while keeping CofounderGPT focused. I should probably start a new thread, but I'll do that next time because it's faster for me to finish the remaining resolvers manually now.
The CreateProject resolver looks like the following code snippet:
import { util, Context } from '@aws-appsync/utils'

export function request(ctx: Context) {
  const { name, description, knowledgeBaseProvider, knowledgeBaseS3Path } = ctx.args.input
  const projectId = util.autoId()
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({
      PK: `PROJECT#${projectId}`,
      SK: `METADATA#${projectId}`,
    }),
    attributeValues: util.dynamodb.toMapValues({
      id: projectId,
      name: name,
      description: description,
      knowledgeBaseProvider: knowledgeBaseProvider,
      knowledgeBaseS3Path: knowledgeBaseS3Path,
      archived: false,
      createdAt: util.time.nowISO8601(),
    }),
  }
}

export function response(ctx: Context) {
  return ctx.result
}
And I also created a resolver for a mutation that assigns a project to a user. Here’s the code:
import { util, Context } from '@aws-appsync/utils'

export function request(ctx: Context) {
  const { username, projectId } = ctx.args
  return {
    operation: 'PutItem',
    key: util.dynamodb.toMapValues({
      PK: `PROJECT#${projectId}`,
      SK: `USER#${username}`,
      GSI1PK: `PROJECTUSER#${username}`,
      GSI1SK: `PROJECT#${projectId}`,
    }),
    attributeValues: util.dynamodb.toMapValues({
      role: 'OWNER',
      createdAt: util.time.nowISO8601(),
    }),
  }
}

export function response(ctx: Context) {
  return ctx.result
}
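Wiring the new mutation resolvers into the CDK stack follows the same pattern as the query resolvers above. Here's roughly what it looks like; the mutation names and file names below are my assumptions, since the actual schema names may differ:
new JavaScriptResolver(this, 'UpdateUserResolver', {
  api: graphQLApi,
  typeName: 'Mutation',
  fieldName: 'updateUser',
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/update-user.ts', // assumed file name
})

new JavaScriptResolver(this, 'CreateProjectResolver', {
  api: graphQLApi,
  typeName: 'Mutation',
  fieldName: 'createProject',
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/create-project.ts', // assumed file name
})

new JavaScriptResolver(this, 'AssignUserToProjectResolver', {
  api: graphQLApi,
  typeName: 'Mutation',
  fieldName: 'assignUserToProject', // assumed mutation name
  dataSource: knowloTableDataSource,
  entry: 'lib/resolvers/assign-user-to-project.ts', // assumed file name
})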
Everything looked good after adding these resolvers to the CDK stack and redeploying it. Let's go back to the setup page.
Setup Page
It took me a while to create a layout for the setup page. I used TailwindUI, but connecting all these dots is not always simple because some things are missing.
The final result looks like the following screenshots:


It could be better, but this will be good enough for the MVP version.
When a user submits the form, we want to do the following (a rough sketch of the handler follows the list):
- Validate the form.
- Show a spinner or an alert with progress.
- Get the upload URL for the CSV file.
- Upload the CSV file to that URL.
- Poll the status URL until we get the confirmation that Knowlo created the embeddings.
- Create a user and a project in the database.
- Assign the user to the project.
- Redirect the user to the dashboard page.
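In the happy path, the submit handler ends up orchestrating these steps roughly like the sketch below. The helper functions, mutation documents, and input fields are simplified placeholders for illustration, not the exact Knowlo code:
import { API, graphqlOperation } from 'aws-amplify'
// Assumed mutation documents, analogous to the custom queries file shown earlier
import { updateUser, createProject, assignUserToProject } from '../../services/graphql/custom-graphql-mutations'

// Hypothetical helpers that call our REST endpoints for the upload and status URLs
declare function getUploadUrl(): Promise<{ uploadUrl: string; statusUrl: string }>
declare function pollEmbeddingsStatus(statusUrl: string): Promise<void>

interface SetupFormData {
  username: string
  firstName: string
  lastName: string
  email: string
  projectName: string
  csvContent: string
}

async function startFreeTrial(form: SetupFormData): Promise<string> {
  // 1. Get the pre-signed upload URL (and the status URL) for the CSV file
  const { uploadUrl, statusUrl } = await getUploadUrl()

  // 2. Upload the CSV content
  await fetch(uploadUrl, { method: 'PUT', body: form.csvContent })

  // 3. Poll the status URL until Knowlo confirms the embeddings were created
  await pollEmbeddingsStatus(statusUrl)

  // 4. Create the user and the project, then assign the user to the project
  const { username, firstName, lastName, email, projectName } = form
  await API.graphql(graphqlOperation(updateUser, { username, input: { firstName, lastName, email } }))
  const created: any = await API.graphql(
    graphqlOperation(createProject, { input: { name: projectName, knowledgeBaseProvider: 'CSV' } })
  )
  await API.graphql(
    graphqlOperation(assignUserToProject, { username, projectId: created.data.createProject.id })
  )

  // 5. Return the new project ID; the caller shows the progress UI and redirects to the dashboard
  return created.data.createProject.id
}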
How hard can it be? Famous last words.
The devil is always in the details. This process was a lot more time-consuming and error-prone than I expected. It would take too long to explain all the issues we faced, so I'll describe only the most important ones.
First, the upload URL API didn't work. It was a simple CORS issue that we fixed quickly. But then the API returned an error because of wrong permissions: the "getUploadUrlLambda" function was not permitted to create the S3 pre-signed URL for reading the file. In addition to that, we were using the same function to create the upload pre-signed URL for the status URL, too.
After we fixed the first set of issues, we got another CORS error, this time from the S3 bucket. The fix was easy, just a few lines of CDK code.
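Here's a sketch of what that CORS rule can look like, assuming a dedicated uploads bucket (the construct name is made up, and in a real setup the allowed origin should be restricted to the app's domain):
import * as s3 from 'aws-cdk-lib/aws-s3'

// Bucket that receives the knowledge base CSV uploads (the name is an example)
const uploadsBucket = new s3.Bucket(this, 'KnowledgeBaseUploadsBucket', {
  cors: [
    {
      allowedMethods: [s3.HttpMethods.PUT, s3.HttpMethods.GET],
      allowedOrigins: ['*'], // tighten this to the Knowlo app origin
      allowedHeaders: ['*'],
    },
  ],
})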
But that fix led us to the next error: the processing failed because the CSV file was invalid. After some trial and error, we figured out that the "WebKitFormBoundary" marker caused it. As this StackOverflow answer explains, each item in a multipart message is separated by a boundary marker, and WebKit-based browsers put "WebKitFormBoundary" in the name of that boundary. The solution was simple, but it took some time to find: reading the file content with FileReader avoids the marker entirely:
// Store the raw file content in state; this is what we upload to the pre-signed URL
const handleFile = (e) => {
  e.preventDefault()
  const content = e.target.result
  setSelectedFile(content)
}

// Read the selected file as text instead of sending it as multipart form data
const handleChangeFile = (file) => {
  const fileData = new FileReader()
  fileData.onloadend = handleFile
  fileData.readAsText(file)
}
Thanks again, StackOverflow! It takes some time to get used to asking ChatGPT instead of using Google and StackOverflow.
Luckily, this was the last major obstacle. After that, we managed to connect the whole signup flow. We also added a small progress tracker once a user clicks the “Start free trial” button. Here’s the screenshot of the progress bar:

I tried creating embeddings of the whole Vacation Tracker knowledge base at least 50 times, and the good news is that it works, and it cost just $0.50 in total!
Scoreboard
Although we set up Facebook ads, we didn't spend any money on them today.
Time spent today: 8h
Total time spent: 161h
Investment today: $0.50 USD (OpenAI Embeddings API cost)
Total investment: $1,142.04 USD
Beta list subscribers: 59
Paying customers: 0
Revenue: $0
What’s Next
The next step is to finish the tooltip creation process and build as many dashboard pages as possible. I hope we've overcome our biggest obstacles and that the JavaScript for the tooltip is the last major milestone in the MVP creation process. But remember, the devil is always in the details.