Day 19: Building an MVP – serverless GraphQL and auth

While Lav is preparing everything for launching an MVP version of Knowlo, I am focusing on finishing the MVP with CofounderGPT. On Day 17, CofounderGPT helped me create the database structure. Now we need to connect the API with the database and protect our app with an authentication flow.
Authentication with Cognito User Pools
We have already decided that we’ll use Amazon Cognito for user authentication. Amazon Cognito is a service provided by Amazon Web Services (AWS) that simplifies user authentication and authorization for mobile and web applications. It handles user registration, sign-in, and access control. It also supports logging in via third-party identity providers like Google, Facebook, and Amazon. In addition to these, you can use Cognito for federated identities from other identity providers or integrate it with AWS Identity and Access Management (IAM) to provide secure access to AWS resources.
Cognito has a generous free tier: we don’t pay for our first 50,000 monthly active users (MAUs). After that, each additional 1,000 MAUs cost $5.50, so at 60,000 MAUs, for example, we would pay roughly $55 per month.
Cognito sounds great until you start using it. Currently, it still has some issues, such as the quality of documentation, lack of flexibility, vague error messages, and some usage limits. However, its seamless integration with AWS services and pricing makes Cognito a good candidate for Knowlo.
As with many other AWS services, Cognito contains two different products under the hood: User Pools and Identity Pools.
Cognito User Pools is the service that we need for Knowlo. A User Pool is a user directory: with it, we can create and maintain a directory of users and add sign-up and sign-in functionality to our application. User Pools store user profile information and can handle workflows like user registration and password recovery. They can also allow users to sign in through third-party identity providers like Google, Facebook, and Amazon, or using SAML identity solutions.
Identity Pools, or Federated Identities, provide temporary AWS credentials for users who are authenticated either through Cognito User Pools or other identity providers like Google, Facebook, or a SAML-based identity provider. These temporary credentials allow the application to access AWS resources, acting essentially as a bridge between your app users and AWS services. The permissions for each of these authenticated users are controlled through IAM roles associated with the identity pool. Identity Pools also support guest users, providing limited AWS access for users who aren’t authenticated. Identity Pools might be useful for our end users, as they are guest users with limited access to Knowlo resources.
Let’s start by creating a Cognito User Pool with AWS CDK. I’ll open the CDK stack file I already worked on (“backend/lib/knowlo-backend-stack.ts”) and add the following code at the top of the file where we imported other constructs:
import * as cognito from 'aws-cdk-lib/aws-cognito'
Then I’ll create a new Cognito User Pool by adding the following code:
const knowloUserPool = new cognito.UserPool(this, 'KnowloUserPool', {
  selfSignUpEnabled: true,
  passwordPolicy: {
    minLength: 8,
    requireLowercase: true,
    requireDigits: true,
    requireSymbols: true,
    requireUppercase: false,
  },
  mfa: cognito.Mfa.OPTIONAL,
  mfaSecondFactor: {
    sms: false,
    otp: true,
  },
  signInAliases: {
    username: true,
    email: true,
  },
  signInCaseSensitive: false,
  accountRecovery: cognito.AccountRecovery.EMAIL_ONLY,
  removalPolicy: environmentParameter.valueAsString === 'production' ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
  deletionProtection: environmentParameter.valueAsString === 'production' ? true : false,
  email: environmentParameter.valueAsString === 'production' ? cognito.UserPoolEmail.withSES({
    fromEmail: '', // TODO: pass email via parameter later
  }) : cognito.UserPoolEmail.withCognito(),
})
In the code above, we define the following:
- selfSignUpEnabled: true – allows users to sign themselves up to the user pool.
- passwordPolicy – defines the password policy that will be enforced for user accounts in the user pool.
- mfa: cognito.Mfa.OPTIONAL – sets multi-factor authentication (MFA) to be optional for the user pool. Users will have the choice to set up MFA if they wish.
- mfaSecondFactor: {sms: false, otp: true} – sets the second factor for MFA to be a one-time password (OTP), and not SMS.
- signInAliases: {username: true, email: true} – allows users to sign in using either their username or email address.
- signInCaseSensitive: false – specifies that sign-in is not case-sensitive. Users can enter their username in any case.
- accountRecovery: cognito.AccountRecovery.EMAIL_ONLY – sets account recovery to be done through email only.
- removalPolicy and deletionProtection – set based on the value of environmentParameter. If the environment is ‘production’, we’ll keep the User Pool.
- email – if the environment is ‘production’, the User Pool uses AWS Simple Email Service (SES) for sending emails. A fromEmail is needed for sending emails; it’s currently an empty string (we’ll add it later). In non-production environments, the User Pool uses the built-in Cognito email capability.
In addition to a User Pool, we’ll need to create a User Pool client. A User Pool Client is an entity that has permission to call unauthenticated APIs, such as signing up, signing in, and handling forgotten passwords.
The User Pool client has various settings which we can configure, including things like which OAuth 2.0 flows we want to enable, the app’s callback URLs (for the OAuth 2.0 authorization endpoint), the sign-out URLs (for the sign-out endpoint), and the OAuth 2.0 scopes.
I’ll create a Cognito User Pool client with the following code snippet:
const knowloUserPoolClient = knowloUserPool.addClient('KnowloUserPoolClient', {
  accessTokenValidity: Duration.minutes(15),
  idTokenValidity: Duration.minutes(15),
  refreshTokenValidity: Duration.days(365),
  supportedIdentityProviders: [cognito.UserPoolClientIdentityProvider.COGNITO],
  preventUserExistenceErrors: true,
  oAuth: {
    flows: {
      authorizationCodeGrant: true,
      implicitCodeGrant: true,
    },
    scopes: [cognito.OAuthScope.EMAIL, cognito.OAuthScope.OPENID, cognito.OAuthScope.PROFILE],
  },
  authFlows: {
    userPassword: true,
  },
  enableTokenRevocation: true,
})
In this code snippet:
- accessTokenValidity and idTokenValidity – set the validity of the access and ID tokens to 15 minutes.
- refreshTokenValidity – sets the validity of the refresh token to approximately 1 year.
- supportedIdentityProviders: [cognito.UserPoolClientIdentityProvider.COGNITO] – sets the supported identity providers to Cognito only.
- preventUserExistenceErrors: true – helps prevent user enumeration by returning a generic error when a user is not found or the wrong password is entered.
- oAuth – configures the OAuth settings for the client: flows sets the OAuth flows that are allowed (in our case, both the authorization code grant and the implicit code grant are enabled), and scopes sets the OAuth scopes that are requested (in our case, access to the user’s email, OpenID, and profile information).
- authFlows: {userPassword: true} – enables the username and password based authentication flow.
- enableTokenRevocation: true – enables revocation of tokens. When a token is revoked, it’s immediately invalidated and can’t be used for authorizing requests.
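The snippet above doesn’t configure any callback or sign-out URLs yet. If we later decide to use the Cognito Hosted UI, they could be added to the same oAuth block, roughly like this (a sketch only, with placeholder URLs instead of real Knowlo domains):
  oAuth: {
    flows: {
      authorizationCodeGrant: true,
      implicitCodeGrant: true,
    },
    scopes: [cognito.OAuthScope.EMAIL, cognito.OAuthScope.OPENID, cognito.OAuthScope.PROFILE],
    // Placeholder URLs for illustration – the real ones would point to the Knowlo frontend.
    callbackUrls: ['https://app.example.com/callback'],
    logoutUrls: ['https://app.example.com/logout'],
  },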
In addition to a Cognito User Pool and User Pool Client, I’ll immediately add a Cognito Identity Pool. It’s easy to delete it later.
Currently, AWS CDK requires installing an additional package for Cognito Identity Pools. I ran npm install @aws-cdk/aws-cognito-identitypool-alpha -S to install the missing dependency.
Then, I imported it at the top of the Knowlo stack file:
import { IdentityPool, UserPoolAuthenticationProvider } from '@aws-cdk/aws-cognito-identitypool-alpha'
Then I added an Identity Pool with the following code:
const knowloIdentityPool = new IdentityPool(this, 'KnowloIdentityPool', {
  allowClassicFlow: false,
  allowUnauthenticatedIdentities: true,
  authenticationProviders: {
    userPools: [new UserPoolAuthenticationProvider({
      userPool: knowloUserPool,
    })],
  },
})
In the code snippet above, we allow unauthenticated users to get temporary AWS credentials and set our User Pool as the only identity provider.
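For context, here’s a rough sketch of how a guest (unauthenticated) end user could later exchange the Identity Pool for temporary AWS credentials on the frontend, using the AWS SDK v3. The region and the Identity Pool ID below are placeholders; the real ID comes from the stack output we export next:
import { fromCognitoIdentityPool } from '@aws-sdk/credential-providers'

// Returns a credential provider that fetches temporary (unauthenticated) AWS
// credentials from the Cognito Identity Pool. It can be passed as `credentials`
// to any AWS SDK v3 client.
const guestCredentials = fromCognitoIdentityPool({
  clientConfig: { region: 'us-east-1' }, // placeholder region
  identityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000', // placeholder ID
})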
Finally, I exported the IDs we might need at the bottom of the Knowlo CDK stack with the following code and redeployed the application:
new CfnOutput(this, 'CognitoUserPoolId', {
  value: knowloUserPool.userPoolId,
})
new CfnOutput(this, 'CognitoUserPoolClientId', {
  value: knowloUserPoolClient.userPoolClientId,
})
new CfnOutput(this, 'CognitoIdentityPoolId', {
  value: knowloIdentityPool.identityPoolId,
})
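These outputs are the values the frontend will eventually need. For example, if we end up using the aws-amplify library (v5) on the frontend, which is just one option, the auth configuration would look roughly like this (all IDs below are placeholders for the stack outputs):
import { Amplify } from 'aws-amplify'

// Placeholder values – in practice these come from the CognitoUserPoolId,
// CognitoUserPoolClientId, and CognitoIdentityPoolId stack outputs.
Amplify.configure({
  Auth: {
    region: 'us-east-1',
    userPoolId: 'us-east-1_XXXXXXXXX',
    userPoolWebClientId: 'xxxxxxxxxxxxxxxxxxxxxxxxxx',
    identityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000',
  },
})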
REST API or GraphQL?
On Day 12, CofounderGPT helped me to plan the architecture for the Knowlo app. The initial idea was to keep it simple, create a REST API for Knowlo, and then connect it from the front end. However, that requires writing more code, and I want to move as fast as possible. So, what are our alternatives?
I am a big fan of GraphQL. If you are not familiar with it, you can read this post. The main benefits of GraphQL are efficiency, fewer round trips (we can get all the data with a single request), and strong typing. However, running a GraphQL server can be a bit more challenging to scale unless we use a managed service. Luckily, AWS has an excellent managed (serverless) GraphQL service called AWS AppSync.
AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like Amazon DynamoDB, AWS Lambda, or any HTTP data source. It also supports real-time subscriptions and offline access to app data.
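To illustrate the "single request" benefit, here’s the kind of query a client could send once our API is ready. The field names anticipate the schema we’ll define with CofounderGPT below:
// One GraphQL request fetches a project together with its tooltips and their
// interactions – data that would take several round trips with a typical REST API.
const getProjectQuery = /* GraphQL */ `
  query GetProject($id: ID!) {
    getProject(id: $id) {
      id
      description
      tooltips {
        id
        name
        interactions {
          id
          interactionType
        }
      }
    }
  }
`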
CofounderGPT already helped me with a DynamoDB table structure. I guess it can help me with the GraphQL schema, too. Let’s see!
If it can, that would speed us up significantly.
Creating a GraphQL schema with CofounderGPT
Slobodan: Can you use this final DynamoDB table design and create a GraphQL schema for the Knowlo app? We’ll add fields that are not part of DynamoDB indexes to all GraphQL types later.
CofounderGPT: Absolutely. Here’s the initial version of your GraphQL schema based on the final DynamoDB table design:
type User {
  id: ID!
  projects: [ProjectUser]
}

type Project {
  id: ID!
  users: [ProjectUser]
  tooltips: [Tooltip]
  # plus other fields, like knowledgeBase, etc.
}

type ProjectUser {
  project: Project
  user: User
  # plus other fields, like user roles in the project, etc.
}

type Tooltip {
  id: ID!
  project: Project
  interactions: [Interaction]
  # plus other fields, like tooltip attributes, etc.
}

type Interaction {
  id: ID!
  tooltip: Tooltip
  # interactionType: InteractionType
  # plus other fields, like endUserId, timestamp, etc.
  archived: Boolean
}

# enum InteractionType {
#   VIEW
#   QUESTION_ASKED
#   TOOLTIP_UPVOTE
#   TOOLTIP_DOWNVOTE
#   TOOLTIP_BUTTON_CLICKED
# }
The commented parts represent the additions for the interaction types which are not currently part of your DynamoDB indexes but can be added later.
GraphQL can simplify complex queries on your DynamoDB data, but remember that you’ll need to implement resolvers to efficiently fetch the related data for your types. Please consider this schema a starting point; you will likely need to adjust it as you develop your application.
As you add fields to your GraphQL types, make sure to update your resolvers and access control rules accordingly.
Slobodan: Ok, this is a good start. Thanks! Can you uncomment the InteractionType type?
Also, let’s add the following fields:
User:
- name
- image (URL to the profile image)
- createdAt (timestamp in ISO 8601 format)
Project:
- knowledgeBaseProvider (a provider of the knowledge base, for example, CrispChat, Intercom, or Zendesk)
- knowledgeBaseS3Path (a location in the S3 bucket where the knowledge base is stored)
- description
- createdAt
ProjectUser:
- role (OWNER, ADMIN, VIEWER)
- createdAt
- archived
Tooltip:
- name
- tooltipTitle (optional)
- tooltipText
- createdAt
- updatedAt
- archived
Interaction:
- endUserId
- createdAt
CofounderGPT: Certainly, here’s your updated GraphQL schema with additional fields:
type User {
  id: ID!
  name: String!
  email: String!
  image: String
  createdAt: String!
  projects: [ProjectUser]
}

type Project {
  id: ID!
  users: [ProjectUser]
  tooltips: [Tooltip]
  knowledgeBaseProvider: String
  knowledgeBaseS3Path: String
  description: String
  createdAt: String!
}

type ProjectUser {
  project: Project
  user: User
  role: UserRole
  createdAt: String!
  archived: Boolean
}

enum UserRole {
  OWNER
  ADMIN
  VIEWER
}

type Tooltip {
  id: ID!
  project: Project
  interactions: [Interaction]
  name: String!
  tooltipTitle: String
  tooltipText: String!
  createdAt: String!
  updatedAt: String!
  archived: Boolean
}

type Interaction {
  id: ID!
  tooltip: Tooltip
  interactionType: InteractionType
  endUserId: String!
  createdAt: String!
  archived: Boolean
}

enum InteractionType {
  VIEW
  QUESTION_ASKED
  TOOLTIP_UPVOTE
  TOOLTIP_DOWNVOTE
  TOOLTIP_BUTTON_CLICKED
}
This schema now reflects the data model you shared, with types for User, Project, ProjectUser, Tooltip, and Interaction. It includes all the fields you requested. Please note that the timestamps are represented as strings in ISO 8601 format. Also, roles are represented as an enumeration (UserRole) with values OWNER, ADMIN, and VIEWER.
Slobodan: Excellent! I made a small mistake: the Project type should have the archived field, not the ProjectUser type. Let’s add some mutations now!
We’ll need the following mutations:
- updateUser – it should receive the user ID and params to update
- createProject
- updateProject – it should receive the project ID and params to update
- archiveProject
- addProjectUser
- removeProjectUser
- createTooltip
- updateTooltip
- deleteTooltip
- createInteraction
CofounderGPT: No problem, here’s the updated schema with archived moved to Project and the new mutations:
type User {
  id: ID!
  name: String!
  email: String!
  image: String
  createdAt: String!
  projects: [ProjectUser]
}

type Project {
  id: ID!
  users: [ProjectUser]
  tooltips: [Tooltip]
  knowledgeBaseProvider: String
  knowledgeBaseS3Path: String
  description: String
  createdAt: String!
  archived: Boolean
}

type ProjectUser {
  project: Project
  user: User
  role: UserRole
  createdAt: String!
}

enum UserRole {
  OWNER
  ADMIN
  VIEWER
}

type Tooltip {
  id: ID!
  project: Project
  interactions: [Interaction]
  name: String!
  tooltipTitle: String
  tooltipText: String!
  createdAt: String!
  updatedAt: String!
  archived: Boolean
}

type Interaction {
  id: ID!
  tooltip: Tooltip
  interactionType: InteractionType
  endUserId: String!
  createdAt: String!
  archived: Boolean
}

enum InteractionType {
  VIEW
  QUESTION_ASKED
  TOOLTIP_UPVOTE
  TOOLTIP_DOWNVOTE
  TOOLTIP_BUTTON_CLICKED
}

type Mutation {
  updateUser(id: ID!, input: UserInput!): User
  createProject(input: ProjectInput!): Project
  updateProject(id: ID!, input: ProjectInput!): Project
  archiveProject(id: ID!): Project
  addProjectUser(projectId: ID!, userId: ID!, role: UserRole!): ProjectUser
  removeProjectUser(projectId: ID!, userId: ID!): Boolean
  createTooltip(input: TooltipInput!): Tooltip
  updateTooltip(id: ID!, input: TooltipInput!): Tooltip
  deleteTooltip(id: ID!): Boolean
  createInteraction(input: InteractionInput!): Interaction
}

input UserInput {
  name: String
  email: String
  image: String
}

input ProjectInput {
  knowledgeBaseProvider: String
  knowledgeBaseS3Path: String
  description: String
}

input TooltipInput {
  projectId: ID!
  name: String
  tooltipTitle: String
  tooltipText: String
}

input InteractionInput {
  tooltipId: ID!
  interactionType: InteractionType
  endUserId: String!
}
In this updated schema, the mutations allow the manipulation of each data type. Input types (UserInput, ProjectInput, TooltipInput, InteractionInput) are used to simplify the mutation parameters.
These mutations are quite straightforward and follow the typical CRUD operations. Just note that the removeProjectUser and deleteTooltip mutations return a Boolean indicating success, rather than the deleted object. The archiveProject mutation only needs the project ID to work, as it just sets the archived field of the project to true.
Slobodan: Thanks! Let’s add some queries. Can you suggest the list of GraphQL queries we might need in the Knowlo application?
CofounderGPT: Absolutely, let’s start by thinking about the typical operations and views that you will need in the Knowlo application:
- getUser: Query a specific user by their ID.
- getProject: Query a specific project by its ID.
- getUserProjects: Query all projects that a user is part of.
- getTooltip: Query a specific tooltip by its ID.
- getProjectTooltips: Query all tooltips that belong to a specific project.
- getInteraction: Query a specific interaction by its ID.
- getTooltipInteractions: Query all interactions that belong to a specific tooltip.
- getProjectInteractions: Query all interactions that belong to a specific project.
Here is what these would look like in your GraphQL schema:
type Query {
  getUser(id: ID!): User
  getProject(id: ID!): Project
  getUserProjects(userId: ID!): [Project]
  getTooltip(id: ID!): Tooltip
  getProjectTooltips(projectId: ID!): [Tooltip]
  getInteraction(id: ID!): Interaction
  getTooltipInteractions(tooltipId: ID!): [Interaction]
  getProjectInteractions(projectId: ID!): [Interaction]
}
These are fairly standard queries that allow you to fetch your data in various ways, based on your different access patterns. You may need to add or adjust some queries based on specific requirements of your application.
Done! Thanks, CofounderGPT! Let’s convert this GraphQL schema to an API we can use.
Building a GraphQL API with AWS AppSync and CDK
As we saw in one of the previous articles on this blog, ChatGPT (and CofounderGPT) is not that good with AWS CDK version 2 because of its training data cutoff. However, I still want to write code a bit faster, so I tried using Copilot.
Let’s see if Copilot can help me to create an AppSync API for the GraphQL schema that CofounderGPT generated.
I imported the AWS AppSync construct with the following code at the top of the CDK stack file (“backend/lib/knowlo-backend-stack.ts”):
import * as appsync from 'aws-cdk-lib/aws-appsync'
Then I started typing the following code: const graphQLApi = new appsync, and the magic happened!

After a few minor changes, I saved the following code:
const graphQLApi = new appsync.GraphqlApi(this, 'KnowloGraphQLApi', {
  name: 'KnowloGraphQLApi',
  schema: appsync.SchemaFile.fromAsset('graphql/schema.graphql'),
  authorizationConfig: {
    defaultAuthorization: {
      authorizationType: appsync.AuthorizationType.USER_POOL,
      userPoolConfig: {
        userPool: knowloUserPool,
      },
    },
  },
  logConfig: {
    fieldLogLevel: environmentParameter.valueAsString === 'production' ? appsync.FieldLogLevel.ERROR : appsync.FieldLogLevel.ALL,
  },
})
Copilot also helped with the GraphQL API URL output at the bottom of the Knowlo CDK stack:
new CfnOutput(this, 'GraphQLApiUrl', {
  value: graphQLApi.graphqlUrl,
})
Finally, I created the “graphql” folder in the root of the backend project and an empty “schema.graphql” file in it. Then I pasted the GraphQL schema that CofounderGPT created and added the following on the top:
## ROOT
schema {
  query: Query
  mutation: Mutation
}
I ran the following command: npm run cdk deploy -- --parameters LogLevel=DEBUG, and less than a minute later, the GraphQL API was deployed and ready!

Writing AppSync resolvers using TypeScript
Our AppSync API is ready but not connected to the database. Our final missing pieces are resolvers. There are multiple ways to write resolvers in AWS AppSync:
- Lambda resolvers – the best option for more complicated business logic or for integrating with services that AWS AppSync does not directly support.
- Velocity templates (VTL) – I’d say that VTL templates are the legacy way to write resolvers. They still work and have an excellent set of utility tools, but VTL templates are hard to write and test.
- JavaScript resolvers – the new version of resolvers that aim to match (or even replace) VTL resolvers in the long term.
We’ll use JavaScript resolvers; they are still relatively new, but they are the best fit for our use case.
Luckily, AppSync JavaScript resolvers now support TypeScript, and we just need to add an esbuild build step, as explained here. But let’s write one resolver first.
Before writing the first resolver, I’ll create a “resolvers” folder in the “lib” folder of the backend project. In that folder, I’ll make a “get-user.ts” file. In addition to that, I’ll install the AppSync utils module from NPM with the following command: npm i @aws-appsync/utils.
Here’s a simple “getUser” resolver:
import { Context, util } from '@aws-appsync/utils'

// The request handler builds a DynamoDB GetItem request for the provided user ID.
export function request(ctx: Context) {
  const { id } = ctx.args
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({
      id,
    }),
    consistentRead: true,
  }
}

// The response handler returns the item, or raises an error if the user doesn't exist.
export function response(ctx: Context) {
  const { id } = ctx.args
  const { result } = ctx
  if (!result) {
    return util.error('User not found', JSON.stringify({ id }))
  }
  return result
}
It just reads the item from the DynamoDB table by ID.
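The resolver is written in TypeScript, but AppSync only accepts JavaScript, which is where the esbuild step mentioned earlier comes in. Here’s a minimal sketch of what that build script might look like, assuming we bundle the files from “lib/resolvers” into “build/resolvers”, the path the resolver definition below expects (the options follow the AWS guidance for bundling JavaScript resolvers):
// build.ts – compile the TypeScript resolvers into plain JavaScript for AppSync.
import { build } from 'esbuild'

build({
  entryPoints: ['lib/resolvers/get-user.ts'],
  outdir: 'build/resolvers',
  bundle: true,
  platform: 'node',
  target: 'esnext',
  format: 'esm',
  sourcemap: 'inline',
  sourcesContent: false,
  // The AppSync runtime provides this module, so we don't bundle it.
  external: ['@aws-appsync/utils'],
}).catch(() => process.exit(1))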
Then I’ll create a DynamoDB data source in the Knowlo CDK stack with the following code:
const knowloTableDataSource = new appsync.DynamoDbDataSource(this, 'KnowloTableDataSource', {
  api: graphQLApi,
  name: 'KnowloTableDataSource',
  description: 'KnowloTableDataSource',
  serviceRole: coreTableRole,
  table: coreDbTable,
})
And create a service role for this data source (in the stack file, this role declaration needs to come before the data source that references it):
const coreTableRole = new iam.Role(this, 'CoreTableRole', {
  assumedBy: new iam.ServicePrincipal('appsync.amazonaws.com'),
})

coreTableRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: [
    'dynamodb:GetItem',
    'dynamodb:PutItem',
    'dynamodb:DeleteItem',
    'dynamodb:Query',
  ],
  resources: [
    coreDbTable.tableArn,
    `${coreDbTable.tableArn}/index/*`,
  ],
}))
Finally, I’ll create a resolver for the “getUser” GraphQL query with the following code:
graphQLApi.createResolver('GetUserResolver', {
  typeName: 'Query',
  fieldName: 'getUser',
  code: appsync.AssetCode.fromAsset('build/resolvers/get-user.js'),
  dataSource: knowloTableDataSource,
  runtime: appsync.FunctionRuntime.JS_1_0_0,
})
Then I tried redeploying my template, and the following error occurred: Runtime not supported for UNIT resolver.
Unfortunately, JavaScript resolvers are only available for AppSync pipeline resolvers at the moment, not unit resolvers. We can handle that, but it requires additional time, and this article is already too long, so we’ll leave that for the next one.
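For the curious, the fix will most likely mean wrapping the function in a pipeline resolver, roughly like the untested sketch below (reusing the same data source and code asset); we’ll go through it properly in the next article.
// A rough sketch of the likely fix: the AppsyncFunction holds the data-source logic,
// and the pipeline resolver itself only passes the result through.
const getUserFunction = new appsync.AppsyncFunction(this, 'GetUserFunction', {
  api: graphQLApi,
  dataSource: knowloTableDataSource,
  name: 'GetUserFunction',
  code: appsync.AssetCode.fromAsset('build/resolvers/get-user.js'),
  runtime: appsync.FunctionRuntime.JS_1_0_0,
})

graphQLApi.createResolver('GetUserResolver', {
  typeName: 'Query',
  fieldName: 'getUser',
  pipelineConfig: [getUserFunction],
  code: appsync.Code.fromInline(`
    export function request(ctx) { return {} }
    export function response(ctx) { return ctx.prev.result }
  `),
  runtime: appsync.FunctionRuntime.JS_1_0_0,
})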
Scoreboard
Time spent today: 8h
Total time spent: 129h
Investment today: $0 USD
Total investment: $1,029.54 USD
Paying customers: 0
Revenue: $0
What’s Next?
Our next step is to finish writing resolvers and finally start connecting the frontend application. I am confident that we can finish writing resolvers fast, especially with some help from Copilot.