Langchain + Supabase Vector Filtering for RAG Applications

· 11 min read
Umut YILDIRIM
Fullstack Developer

Supabase, acclaimed as an open-source alternative to Firebase, offers a comprehensive toolkit including database, authentication, and storage solutions. It excels at powering Minimum Viable Products (MVPs) and hackathon projects, and it's particularly well suited to building RAG (Retrieval-Augmented Generation) applications. In this guide, I'll demonstrate how to leverage Supabase to enhance your RAG application's efficiency.

Here's the workflow: Whenever a user uploads a document, we store its vector representation in Supabase. This vector acts as a unique fingerprint of the document's content. Later, when a query is raised, Supabase springs into action, searching for vectors that closely match the query. This search process efficiently retrieves the most relevant documents.

An added layer of personalization ensures that users interact only with their content. We implement a filter to display documents created by the user, providing a tailored and secure user experience. This approach not only streamlines the search for relevant information but also maintains user data privacy and relevance.

If you want to see the final product, check out the demo. The source code is available on GitHub. We will build this application from scratch in another tutorial. However, for now, let's focus on the Supabase vector filtering.

info

In this tutorial, we'll use the Langchain JavaScript Client to interact with the database. All of the examples are written in JavaScript.

How Does Langchain Work?

LangChain is a framework for developing applications powered by language models. It enables applications that:

  • Are context-aware: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
  • Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

Langchain Introduction

How to set up the database?

In this tutorial we will mainly focus on Supabase's PostgreSQL database. You can use any other database as well, as long as it supports vector filtering. You can check the Langchain documentation for more information.

Start by creating a new project on Supabase. You can use the free plan for this tutorial. After creating your project, navigate to SQL Editor and create a new table called documents. You can use the following SQL query to create the table.

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int DEFAULT null,
  filter jsonb DEFAULT '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  embedding jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    (embedding::text)::jsonb as embedding,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;
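
Before moving on, you can sanity-check the function straight from the SQL Editor. This is a minimal sketch, assuming a pgvector version with array-to-vector casts; any non-zero constant vector stands in for a real query embedding, and the brand value is made up:

-- Smoke test for match_documents (placeholder values)
select id, metadata, similarity
from match_documents(
  array_fill(0.1, array[1536])::vector(1536), -- placeholder query embedding
  5,                                          -- match_count
  '{"brand_id": "brand_1234"}'::jsonb         -- metadata filter
);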

First step is done 🎉.

How to add data to the database?

Now that we have our database ready, we can start adding data to it. Since we need to retrieve relevant data for our queries, we need to attach relevant metadata to our documents. Your SQL row should look like this:

Supabase Table Editor

Did you catch the brandId field? We will use this field to filter the documents. We will only show the documents that are created by the user. This way, we can provide a personalized experience to our users.
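
For reference, a stored chunk's metadata might look roughly like this (the shape is illustrative: the brand value is made up, and Langchain's text splitters may also attach line-location info depending on version):

{
  "brand_id": "brand_1234",
  "loc": { "lines": { "from": 1, "to": 12 } }
}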

But how do we do that in our application? Let's take a look at the code. Keep in mind that I'm using Next.js for this tutorial; you can use any other framework as well.

import { NextRequest, NextResponse } from "next/server";
import {
  RecursiveCharacterTextSplitter,
  CharacterTextSplitter,
} from "langchain/text_splitter";

import { createClient } from "@supabase/supabase-js";
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { auth } from "@clerk/nextjs";

export const runtime = "edge";

async function handleExtension(extension: string, content: string, brand_id: string, client: any) {
  let splitter;

  if (extension === "txt") {
    splitter = new CharacterTextSplitter({
      separator: " ",
      chunkSize: 256,
      chunkOverlap: 20,
    });
  } else {
    const language = extension === "md" ? "markdown" : "html";
    splitter = RecursiveCharacterTextSplitter.fromLanguage(language, {
      chunkSize: 256,
      chunkOverlap: 20,
    });
  }

  // Attach brand_id as metadata so we can filter by it later
  const splitDocuments = await splitter.createDocuments(
    [content],
    [{ brand_id: brand_id }],
  );

  // Embed the chunks and store them in the documents table
  const vectorstore = await SupabaseVectorStore.fromDocuments(
    splitDocuments,
    new OpenAIEmbeddings({
      openAIApiKey: process.env.NEXT_SECRET_OPENAI_API_KEY!,
      configuration: {
        baseURL: "https://gateway.ai.cloudflare.com/v1/********/********/openai",
      },
    }),
    {
      client,
      tableName: "documents",
      queryName: "match_documents",
    },
  );
}

export async function POST(req: NextRequest) {
  const { userId, getToken } = auth();
  if (!userId) {
    return NextResponse.json({ error: "Not logged in." }, { status: 403 });
  }
  const token = await getToken({ template: "supabase" });
  const body = await req.json();
  const { name, content, brand_id, knowledge_id } = body;

  // Get file extension
  const extension = name.split(".").pop();

  // Accept these file types:
  // Markdown, Text, HTML
  if (!["md", "txt", "html"].includes(extension)) {
    return NextResponse.json(
      {
        error: [
          "File type not supported.",
          "Please upload a markdown, text, or html file.",
        ].join("\n"),
      },
      { status: 403 },
    );
  }

  try {
    const client = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.NEXT_PUBLIC_SUPABASE_KEY!,
      {
        global: {
          headers: {
            Authorization: `Bearer ${token}`,
          },
        },
      },
    );

    await handleExtension(extension, content, brand_id, client);
    return NextResponse.json({ ok: true }, { status: 200 });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: 500 });
  }
}

Now let's analyze the code step by step:

  1. When user adds a new document, it calls a POST API endpoint. We call it /api/ingest/ in our application.
  2. The ingest API endpoint gets the user's token and userId from Clerk.
  3. We make sure that the user is logged in. If not, we return an error.
  4. We get the file extension from the file name. We only accept md, txt, and html files. So we make sure that the file extension is one of them. If not, we return an error.
  5. We create a new Supabase client using the token we got from Clerk.
  6. We call the handleExtension function with the file extension, content, brand_id, and the Supabase client.
  7. We split the content into chunks and create documents using the CharacterTextSplitter class.
  8. We add brand_id to the metadata of the document. This way, we can filter the documents later.
  9. We call the SupabaseVectorStore.fromDocuments function to add the documents to the database. This function will add the vector representation of the document to the database.
  10. We return a success message to the user.

We split our data into chunks because we need to make sure that the vector representation of each document is not too big. If it's too big, it makes our KNN search harder and less precise. So we split the document into chunks and store the vector representation of each chunk in the database. After that, we converted the content to a vector representation using OpenAI Embeddings. You can use any other embedding model as well; check the Langchain documentation for more information.

You might have realized that we are using brand_id instead of user_id. This is because we want users in the same organization to be able to see the files and embeddings created by each other. If you want users to see only their own files, you can use user_id instead of brand_id. You can also use both if you want to.

How to query the database?

Now that we have some context in our database, we can start querying it to get relevant data. While Langchain requests this data from the database, all searching and filtering is done on the Supabase side using PostgreSQL functions. Let's take a look at the code and then analyze it.

info

In this example I used the Agents API to query the database. This lets the model decide what to search for. You can also use plain retrieval to force a context search on every message, but I don't recommend it, since it puts more load on your database.

import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";

import { createClient } from "@supabase/supabase-js";

import { ChatOpenAI } from "langchain/chat_models/openai";
import { SupabaseVectorStore } from "langchain/vectorstores/supabase";
import { AIMessage, ChatMessage, HumanMessage } from "langchain/schema";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import {
  createRetrieverTool,
  OpenAIAgentTokenBufferMemory,
} from "langchain/agents/toolkits";
import { ChatMessageHistory } from "langchain/memory";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

export const runtime = "edge";

const convertVercelMessageToLangChainMessage = (message: VercelChatMessage) => {
  if (message.role === "user") {
    return new HumanMessage(message.content);
  } else if (message.role === "assistant") {
    return new AIMessage(message.content);
  } else {
    return new ChatMessage(message.content, message.role);
  }
};

const TEMPLATE = `You are a helpful assistant named "MarkAI". If you don't know how to answer a question, use the available tools to look up relevant information.`;

export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const brand_id = body.brand_id;
    // Check brand id for validation
    if (!brand_id) {
      return NextResponse.json(
        { error: "brand_id is either empty or wrong." },
        { status: 400 },
      );
    }
    const messages = (body.messages ?? []).filter(
      (message: VercelChatMessage) =>
        message.role === "user" || message.role === "assistant",
    );
    const returnIntermediateSteps = body.show_intermediate_steps;
    const previousMessages = messages.slice(0, -1);
    const currentMessageContent = messages[messages.length - 1].content;

    const model = new ChatOpenAI({
      openAIApiKey: process.env.NEXT_SECRET_OPENAI_API_KEY!,
      modelName: "gpt-3.5-turbo",
      // This was used so I could track usage of the model in the Cloudflare Dashboard.
      // For more info: https://developers.cloudflare.com/ai-gateway/
      configuration: {
        baseURL: "https://gateway.ai.cloudflare.com/v1/**************/*******/openai",
      },
    });

    const client = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.NEXT_PUBLIC_SUPABASE_KEY!,
    );
    // The filter option restricts the similarity search to rows whose
    // metadata contains this brand_id
    const vectorstore = new SupabaseVectorStore(
      new OpenAIEmbeddings({
        openAIApiKey: process.env.NEXT_SECRET_OPENAI_API_KEY!,
      }),
      {
        client,
        tableName: "documents",
        queryName: "match_documents",
        filter: {
          brand_id: brand_id,
        },
      },
    );

    const chatHistory = new ChatMessageHistory(
      previousMessages.map(convertVercelMessageToLangChainMessage),
    );

    const memory = new OpenAIAgentTokenBufferMemory({
      llm: model,
      memoryKey: "chat_history",
      outputKey: "output",
      chatHistory,
    });

    const retriever = vectorstore.asRetriever();

    const tool = createRetrieverTool(retriever, {
      name: "search_latest_knowledge",
      description: "Searches and returns up-to-date general information.",
    });

    const executor = await initializeAgentExecutorWithOptions([tool], model, {
      agentType: "openai-functions",
      memory,
      returnIntermediateSteps: true,
      verbose: true,
      agentArgs: {
        prefix: TEMPLATE,
      },
    });

    const result = await executor.call({
      input: currentMessageContent,
    });

    if (returnIntermediateSteps) {
      return NextResponse.json(
        { output: result.output, intermediate_steps: result.intermediateSteps },
        { status: 200 },
      );
    } else {
      // Agent executors don't support streaming responses (yet!), so stream back
      // the complete response one character at a time to simulate it.
      const textEncoder = new TextEncoder();
      const fakeStream = new ReadableStream({
        async start(controller) {
          for (const character of result.output) {
            controller.enqueue(textEncoder.encode(character));
            await new Promise((resolve) => setTimeout(resolve, 20));
          }
          controller.close();
        },
      });

      return new StreamingTextResponse(fakeStream);
    }
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: 500 });
  }
}

Now let's analyze the code step by step:

  1. Import Dependencies: Import necessary modules from Next.js, ai, @supabase/supabase-js, and langchain.

  2. Declare Runtime Environment: Define runtime as "edge," indicating the environment where the code will run.

  3. Convert Function: convertVercelMessageToLangChainMessage converts messages from VercelChatMessage format to LangChain's message format based on the role (user, assistant, or other).

  4. Define Assistant Template: Set TEMPLATE as a prompt for the AI assistant.

  5. POST Function:

    • Initialize: Define an asynchronous POST function for handling HTTP POST requests.
    • Parse Request Body: Extract brand_id and messages from the request body.
    • Validate Brand ID: Return an error response if brand_id is missing or invalid.
    • Filter Messages: Only keep messages from users or assistants.
    • Extract Content: Get the content of the current message and previous messages.
    • Setup AI Model: Initialize ChatOpenAI with GPT-3.5 model and configuration for Cloudflare AI Gateway.
    • Supabase Client: Create a Supabase client for database interactions.
    • Vector Store: Initialize SupabaseVectorStore with OpenAI embeddings, Supabase client, and table configuration.
    • Chat History: Create a chat history object from previous messages.
    • Memory: Setup OpenAIAgentTokenBufferMemory using the model and chat history.
    • Retriever Tool: Initialize a retriever tool for fetching up-to-date information.
    • Executor: Prepare executor with tools, model, memory, and other configurations.
    • Execute AI Model: Call the executor with the current message and capture the result.
    • Return Response: If returnIntermediateSteps is true, return output and intermediate steps; otherwise, stream the response character by character.
    • Error Handling: Catch and return errors in a JSON response.
  6. Runtime Configuration: The code configures and executes in an edge environment, suitable for high-performance and low-latency applications.

The code essentially sets up a sophisticated chatbot that can handle user queries by integrating various AI and database technologies. It supports real-time interactions and can provide detailed responses, including intermediate steps of the AI's reasoning process.

Conclusion

I trust this guide clarified the process of integrating Supabase vector filtering with Langchain, a challenge that took me two days to overcome. My aim is to simplify your experience. Should you have inquiries, please don't hesitate to contact me at [email protected] or raise an issue on the GitHub repository of my demo app here.

Add ChatGPT to your portfolio

· 6 min read
Umut YILDIRIM
Fullstack Developer

ChatGPT and similar large language models (LLMs) are currently at the forefront of every developer's mind, emerging as the latest trend in 'Silicon Valley'. The hype is real, and the possibilities are endless. But how do you get started? In this article, I will show you how to create a simple chatbot using ChatGPT and Voiceflow. I will also show you how to add it to your portfolio using Docusaurus.

Let's understand the basics first

Before delving into the creation of a ChatGPT-based chatbot, it's essential to grasp the fundamental principles underlying ChatGPT. Developed by OpenAI, ChatGPT stands as a prominent example of a large language model (LLM), distinguished by its proficiency in producing text that closely resembles human writing. This capability stems from its design as a transformer model, a type of architecture that has significantly advanced the field of natural language processing (NLP).

ChatGPT trying its best

The core strength of ChatGPT lies in its extensive training on a diverse array of text sources. This comprehensive dataset equips the model with a broad understanding, enabling it to engage in and respond to a wide spectrum of topics with a natural, conversational tone.

ChatGPT Inaccurate

However, the reliance on pre-existing data also introduces a limitation. ChatGPT's knowledge is confined to its training material, meaning it lacks the ability to generate information beyond its training scope. This is where innovative technologies like Retrieval-Augmented Generation (RAG) play a pivotal role. RAG merges the transformer model's capabilities with a search engine's prowess. This combination allows ChatGPT to not only generate responses based on its internal knowledge but also to pull in and utilize relevant, up-to-date information from external sources, such as the internet or specialized vector databases. This enhancement significantly broadens the model's applicability and accuracy, making it a more dynamic tool in the realm of NLP.

ChatGPT Retrieval

Let's build a chatbot

Now that we have a basic understanding of ChatGPT, let's build a chatbot using Voiceflow. Voiceflow is a no-code platform that allows you to create voice and chat-based applications. It is a great tool for beginners and experienced developers alike, as it provides a simple, intuitive interface that enables you to build a functional chatbot in a matter of minutes. It also offers 'Knowledge Base' integration, which is essential for our chatbot project.

Create a new assistant

First, create a new assistant by clicking on the 'Create Assistant' button on the top right corner of the screen. Then, name your chatbot and select the 'Chat' option and don't forget to select your preferred language. Finally, click on the 'Create Assistant' button to create your assistant.

Voiceflow Creator

Voiceflow Creator Filled

Congratulations! You have successfully created your first assistant. Now, let's move on to the next step and add some knowledge to your assistant.

Change response type to 'Knowledge Base'

Since we want a chatbot that answers questions based on your personal knowledge base, we need to change the response type to 'Knowledge Base'. To do that, click on the first block in 'AI response output' and select the 'Knowledge Base' option. Finally, click on the 'Save' button to save your changes.

Voiceflow Knowledge Base

Update

Voiceflow has changed the way your knowledge base is used. Instead of the Retrieval method, they now use Agents to grab relevant information from your knowledge base. This is a great improvement that makes your chatbot more accurate.

Voiceflow Knowledge Base

Well, now your chatbot is ready to answer questions, but it doesn't have any knowledge yet. Let's add some knowledge to your assistant.

Add knowledge to your assistant

To add knowledge to your assistant, you need to prepare text, PDF, or DOCX files, or your existing website. After that, you need to upload it to Voiceflow. Once you have uploaded your knowledge, you can start adding it to your assistant.

Voiceflow Knowledge Base

To do that, click on the 'Add Knowledge' button on the top right corner of the screen. Then, select the 'Upload' option and upload your knowledge. Finally, click on the 'Add Knowledge' button to add your knowledge to your assistant.

Voiceflow Knowledge Base Filled

Congratulations! You have successfully added knowledge to your assistant. Now, let's move on to the next step and test your assistant.

Test your assistant

Wow, in less than 10 minutes you have created your first chatbot. Now, let's test your assistant. To do that, click on the 'Run' button in the top right corner of the screen, or simply press the 'R' key on your keyboard. Then, click on the 'Run Test' button to start testing your assistant.

Voiceflow Chatbot Test First

Let's start by asking some easy questions. For example, ask your chatbot 'Who is <your name>?' or 'Who is <your name>'s current employer?'. If you have added your personal information to your knowledge base, your chatbot should be able to answer these questions.

Voiceflow First Test

If you don't see any answers, you are either missing some information in your knowledge base or you have not attached your knowledge base to your assistant. You can verify the latter by clicking on the 'Knowledge Base' button in the top right corner of the screen.

Add your chatbot to your portfolio

If your portfolio is HTML based, you can add your chatbot to your portfolio by adding the following code to your HTML file.

<script type="text/javascript">
  (function (d, t) {
    var v = d.createElement(t),
      s = d.getElementsByTagName(t)[0];
    v.onload = function () {
      window.voiceflow.chat.load({
        verify: { projectID: 'your project id' },
        url: 'https://general-runtime.voiceflow.com',
        versionID: 'production',
      });
    };
    v.src = 'https://cdn.voiceflow.com/widget/bundle.mjs';
    v.type = 'text/javascript';
    s.parentNode.insertBefore(v, s);
  })(document, 'script');
</script>

You can also customize your chatbot by checking out the Integrations tab on Voiceflow.

Voiceflow Integrations

Add your chatbot to your Docusaurus portfolio

If your portfolio is based on Docusaurus, you can add your chatbot to your portfolio by following these steps.

Create a new file called voiceflow.js in your src/theme folder and add the following code to it.

import ExecutionEnvironment from '@docusaurus/ExecutionEnvironment';

if (ExecutionEnvironment.canUseDOM) {
  (function (d, t) {
    var v = d.createElement(t),
      s = d.getElementsByTagName(t)[0];
    v.onload = function () {
      window.voiceflow.chat.load({
        verify: { projectID: '64bad5417ef5eb00077b0c2d' },
        url: 'https://general-runtime.voiceflow.com',
        versionID: 'production',
      });
    };
    v.src = 'https://cdn.voiceflow.com/widget/bundle.mjs';
    v.type = 'text/javascript';
    s.parentNode.insertBefore(v, s);
  })(document, 'script');
}

Then, add the following code to your docusaurus.config.js file.

  clientModules: [require.resolve('./src/theme/voiceflow.js')],

And you are done! Now you can start your local server by running yarn start and test your chatbot.

Voiceflow Docusaurus

Conclusion

This guide demonstrated how to enhance your tech portfolio by integrating ChatGPT into a chatbot project, using Voiceflow and Docusaurus. It's a testament to the ease of using advanced AI in modern web development, showcasing your skills in navigating emerging technologies. This project is not just a technical achievement, but a step towards future innovations in AI. Keep exploring and embracing new tech frontiers!

How I use my Raspberry Pi?

· 9 min read
Umut YILDIRIM
Fullstack Developer

Raspberry Pi, come out of the closet! It's time for you to shine! I'll show you how I give my Pi a good workout and I'll teach you how to do the same in this post. Let's get physical (with technology)!

Cloudflare Tunnel (Optional)

Cloudflare Tunnel allows you to access your Raspberry Pi without a publicly available IP address. So let's SSH into our Pi and install cloudflared.

You don't strictly need to do this, but I highly suggest using Cloudflare Tunnel: it makes your Raspberry Pi reachable over the internet so you can access your Pi from anywhere.

Install Required Software

  1. Our first task is to perform an update of the package list as well as upgrade any out-of-date packages. You can perform both of these tasks using the following commands in the terminal.
sudo apt update
sudo apt upgrade
  2. Once the update completes, we must ensure we have both the “curl” and “lsb-release” packages. Install both of these packages by using the command below in the terminal.
sudo apt install curl lsb-release
  3. With all the required packages in place, we can finally grab the GPG key for the Cloudflared repository and store it on our Raspberry Pi. A GPG key is crucial to verify that the packages we are installing are valid and belong to the repository.
curl -L https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-archive-keyring.gpg >/dev/null
  4. With the GPG key saved into our keyrings folder, our next step is to add the Cloudflared repository to our Raspberry Pi. You can add it with the following command.
echo "deb [signed-by=/usr/share/keyrings/cloudflare-archive-keyring.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/cloudflared.list
  5. As we have made changes to the available repositories, we will need to perform another update of the package list cache. You can update this cache by using the following command within the terminal.
sudo apt update

Installing Cloudflared and setting up a Tunnel to the Raspberry Pi

With the repository added, we can now proceed to install the Cloudflared package to our Raspberry Pi.

To install this package, you will want to run the following command.

sudo apt install cloudflared

Now that we have prepared our Raspberry Pi, we can set up the Cloudflare tunnel. Go to the Cloudflare Zero Trust dashboard and navigate to Access -> Tunnels, then click on Create a tunnel. Give your tunnel a meaningful name. Cloudflare will create a new tunnel and show you instructions on the page.

Cloudflared Zero Trust Tunnel Setup

Since we already installed cloudflared, copy the command shown on the right and paste it into your SSH session. The Cloudflare dashboard shows Connectors at the bottom of the setup page; once your Raspberry Pi finishes running the command you pasted, your device will appear there and you can continue the setup. You will see this page:

Cloudflare Zero Trust Tunnel Route Setup

So let's make our SSH browser-based. Follow these instructions:

  • Subdomain: ssh
  • Domain: your choice (you can get a free domain via Freenom)
  • Path: empty
  • Type: SSH
  • URL: localhost:22

Then click Save. Congratulations, you have successfully created your Cloudflare Tunnel. Now add this tunnel to Applications and you are good to go! (For terminal access through the tunnel, see the sketch below.)
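
A minimal sketch of the client-side SSH config, assuming your tunnel hostname is ssh.example.com (illustrative) and cloudflared is installed on the client machine:

# ~/.ssh/config on the client machine (hostname illustrative)
Host ssh.example.com
  ProxyCommand cloudflared access ssh --hostname %h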

Docker

Let's install Docker on our Raspberry Pi. Docker is a containerization platform that allows you to run applications in isolated containers. This is a great way to run applications on your Raspberry Pi without having to worry about dependencies and other issues that can arise when running multiple applications on the same device.

Install Required Software

  1. Our first task is to perform an update of the package list as well as upgrade any out-of-date packages. You can perform both of these tasks using the following commands in the terminal.
sudo apt update
sudo apt upgrade
  2. With our Raspberry Pi entirely up to date, we can now go ahead and install Docker to the Raspberry Pi.

Luckily for us, Docker has made this process incredibly quick and straightforward by providing a bash script that installs everything for you.

You can download and run the official Docker setup script by running the following command.

curl -sSL https://get.docker.com | sh

This command will pipe the script directly into the command line. Typically it would be best if you didn’t do this; however, Docker is a trusted source.

Setting up the Pi user for Docker

We need to make a slight adjustment to our pi user before we can start using Docker without issues. This is to do with the way that the Linux permission system works with Docker.

  1. Once Docker has finished installing to the Pi, there are a couple more things we need to do. For a user to be able to interact with Docker, it needs to be added to the docker group. So our next step is to add our pi user to the docker group by using the command below.
sudo usermod -aG docker pi
  2. With the pi user added to the docker group, we can now log out of the Raspberry Pi and log back in again. This will ensure that the changes we have made to the pi user are applied.
logout
  3. Once you have logged back in, you can verify that the pi user has been added to the docker group by running the following command.
groups

You should see the docker group listed in the output. If you do not see the docker group listed, you will need to log out and log back in again. Once you have verified that the pi user has been added to the docker group, we can move on to the next step.
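
As a quick smoke test that the pi user can talk to the Docker daemon without sudo, Docker's standard hello-world image works well:

docker run hello-world

If the test prints its welcome message without needing sudo, the group change worked.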

Increasing the Swap File Size

The Raspberry Pi is a great little device, but it does have one major drawback. It only has a small amount of RAM. This can be a problem when running Docker containers as they can use a lot of RAM. To get around this, we can increase the size of the swap file.

  1. Before we can increase our Raspberry Pi’s swap file, we must first temporarily stop it. The swap file cannot be in use while we increase it. To stop the operating system from using the current swap file, run the following command.
sudo dphys-swapfile swapoff
  2. With the swap file stopped, we can now increase its size. To do this, we will need to edit the /etc/dphys-swapfile file. You can edit this file by running the following command.
sudo nano /etc/dphys-swapfile
  3. Once the file has opened, we need to change the value of the CONF_SWAPSIZE variable. This variable controls the size of the swap file. By default, this value is set to 100; we will increase it to 200.
CONF_SWAPSIZE=200
  4. Once you have made the change, save the file by pressing CTRL+X, then Y to confirm the save, and then ENTER to exit the editor.
  5. We can now re-initialize the Raspberry Pi’s swap file by running the command below. Running this command will delete the original swap file and recreate it to fit the newly defined size.
sudo dphys-swapfile setup
  6. With the swap file re-initialized, we can now start it again by running the following command.
sudo dphys-swapfile swapon
  7. If you want all programs to be reloaded with access to the new memory pool, the easiest way is to restart your device (you can verify the new size afterwards, as shown below).
sudo reboot
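
Once the Pi comes back up, standard tools confirm the larger swap pool is active (output will vary by system):

free -h
swapon --show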

Now what?

You can use Docker to run any application that you want. You can even run multiple applications at the same time. So here is a list of applications I use on my Raspberry Pi.

Bitwarden

Bitwarden is a free and open-source password manager that allows you to store all of your passwords in one secure location. This is a great way to keep all of your passwords safe and secure.

Here is a guide on how to install Bitwarden on your Raspberry Pi.

  1. Install the Bitwarden image using the CLI.
docker pull vaultwarden/server:latest
  2. Once Docker finishes downloading Bitwarden RS (now Vaultwarden) to your Raspberry Pi, you can continue.
sudo docker run -d --name bitwarden \
  --restart=always \
  -v /bw-data/:/data/ \
  -p 127.0.0.1:9999:80 \
  -p 127.0.0.1:3012:3012 \
  vaultwarden/server:latest

Uptime Kuma

Uptime Kuma is a free and open-source uptime monitoring tool that allows you to monitor the status of your websites and services. This is a great way to keep track of the status of your websites and services.

Here is a guide on how to install Uptime Kuma on your Raspberry Pi.

  1. Install the Uptime Kuma image using the CLI.
docker pull louislam/uptime-kuma
  2. Create a volume for Uptime Kuma.
docker volume create uptime-kuma
  3. Start the Uptime Kuma container.
docker run -d --name uptime-kuma \
  --restart=always \
  -p 9998:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma

Netdata

Netdata is a free and open-source real-time performance monitoring tool that allows you to monitor the performance of your Raspberry Pi. This is a great way to keep track of the performance of your Raspberry Pi.

Here is a guide on how to install Netdata on your Raspberry Pi.

  1. Install the Netdata image using the CLI.
docker pull netdata/netdata
  2. Start the Netdata container.
docker run -d --name netdata \
  --restart=always \
  -p 9997:19999 \
  -v netdataconfig:/etc/netdata \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata

Conclusion

Thank you for reading this guide. If you found this guide useful, please consider sharing it with your friends and family.

How we created Flatiron Open Source - Backend - Part 2

· 6 min read
Umut YILDIRIM
Fullstack Developer
Ian Gottheim
Fullstack Developer

Welcome to part 2 of our Flatiron Open Source adventure, where we will talk about our backend setup. We fully utilized Cloudflare in our project. We are going to explain this process step by step.

Step 1: Obtaining Free Domain

We are going to obtain our free development domain by using a service called Freenom. Freenom Landing Create an account and find an available domain. When you finish obtaining your domain, proceed to step 2. Freenom Domains

Step 2: Creating your Cloudflare Account

Cloudflare is a DNS (Domain Name System) provider. You can add your domain to Cloudflare, and Cloudflare will protect you against bad actors and DDoS attacks. Now let's create your account by clicking the Sign Up button in the top right of the webpage. Cloudflare After registering with Cloudflare you will be redirected to the dashboard. You will need to click the Add Site button at the top of the webpage and enter your Freenom domain from step 1. Cloudflare Add Site Once you add your domain you will be given 2 nameserver addresses. You need to access your Freenom dashboard and click the Manage Domain button. Once it finishes loading, click on Management Tools, and then Nameservers. Cloudflare Nameserver You need to copy your Cloudflare nameservers and paste them into the Freenom nameserver textboxes, and then click the Change Nameservers button. Once you click the button, the process can take up to 20 minutes to complete. Once Cloudflare finishes the setup, you will receive an email congratulating you on setting up your first domain.

Step 3: Cloudflare Pages Setup

Cloudflare Pages allows you to deploy your dynamic front-end applications. The platform is super fast and is always up to date, deploying directly from your Git provider (this assumes you have a GitHub account).

You can also check their documentation.

  1. Log in to the Cloudflare dashboard.
  2. Select your account in Account Home > Pages.
  3. Select Create a project > Connect to Git.

Configure your deployment

Once you have selected a Git repository, select Install & Authorize and Begin setup. You can then customize your deployment in Set Up Builds And Deployments.

Your project name will be used to generate your project’s hostname. By default, this matches your Git project name.

The production branch indicates the branch that Cloudflare Pages should use to deploy the production version of your site. For most projects, this is the main or master branch.

Cloudflare Name Project

Since we are using Vite 3, we will follow the Vite 3 deployment guide for Cloudflare Pages.

  1. Log in to the Cloudflare dashboard and select your account.
  2. Go to Pages > Create a project > Connect to git.
  3. Select your new GitHub repository.
  4. In the Set up builds and deployments, set yarn build as the Build command, and dist as the Build output directory.
  5. Select Environment variables (advanced) > + Add variable > configure a NODE_VERSION variable with a value of any version of Node greater than 14.18 – this example uses 16.

After completing configuration, select Save and Deploy, and wait for the deployment to finish. After you have deployed your project, it will be available at the <YOUR_PROJECT_NAME>.pages.dev subdomain. After testing the website, you can add a custom domain. Because our domain is already on Cloudflare, this is easy: click on Custom Domains and type your domain name, and Cloudflare will automatically update your DNS record.

Step 4: Cloudflare KV Setup

For backend work, Cloudflare has a platform called Workers KV. Workers KV is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.

KV supports exceptionally high read volumes with low latency, making it possible to build highly dynamic APIs and websites that respond as quickly as a cached static file would. There are some request limits and speed limits, that can be read about in the documentation here

Setup

Cloudflare KV Click on Workers, then KV in the sidebar. Then click the Create namespace button and give your namespace a name. Finally, add a demo key and value to test functionality.

Why We Used a Python Script for Backend Work

You might have noticed there is a Python script in our project's root. We created that script based on the content provided by Flatiron School upon graduation (as discussed in part 1 of the blog). When you run this script, it scrapes through the Flatiron files and creates a formatted output file with the necessary key:value information for the project's backend. After running the script, the content was exported to Cloudflare KV.
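
If you prefer the command line over the dashboard for this step, wrangler's bulk upload can push the script's output to KV in one shot. This is a sketch under two assumptions: the output file is a JSON array of {"key": ..., "value": ...} objects (the format the bulk command expects), and the file and binding names are illustrative:

wrangler kv:bulk put ./output.json --binding "your_binding_for_env"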

Step 5: Cloudflare Workers Setup

This is a simple Worker that requests data from Workers KV and sends it to the client side. It is very fast and can be used to serve static files; a rough sketch of what such a Worker can look like is shown below.
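
This sketch uses the classic service-worker syntax and assumes the KV binding name from the wrangler.toml example below; the key naming is illustrative, and our actual source is linked at the end of this section:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Use the request path as the KV key, e.g. /software-engineering
  const key = new URL(request.url).pathname.slice(1) || 'index';
  const value = await your_binding_for_env.get(key);
  if (value === null) {
    return new Response('Not found', { status: 404 });
  }
  return new Response(value, {
    headers: { 'content-type': 'application/json' },
  });
}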

Setup

To set up the worker, install the Cloudflare CLI (command line interface), called wrangler, as a global package:

yarn global add @cloudflare/wrangler

or

npm install -g @cloudflare/wrangler

Then run

wrangler login

and follow the instructions.

Usage

To use the worker you need to create a KV namespace and upload the files you want to serve. Then you need to add the namespace id to the wrangler.toml file.

kv-namespaces = [
  { binding = "your_binding_for_env", id = "your_namespace_id" }
]

Also replace the account id in the wrangler.toml file with your own account id.

account_id = "your_account_id"

Then you can run

wrangler publish

You can find the source code here.

We also gave this Worker a custom domain. You can give your own Worker a custom domain as well, as shown in the image below. Cloudflare Workers Domain

Step 6: Cloudflare R2 Setup

We are using Cloudflare R2 because Product Design is a resource-intensive cohort. There is more than 250MB of data, which is why we decided to use R2 instead of uploading these resources to the assets folder of our GitHub repository. We also gave our R2 bucket a custom domain so we can access it from our website. This documentation explains how to add a custom domain to your R2 bucket.

Conclusion

Cloudflare is a popular choice for SaaS companies due to its wide range of free and inexpensive services. Its user-friendly documentation and abundance of resources make it an especially appealing option for new full stack engineers. I highly recommend considering Cloudflare for your project.

If you encounter any issues, it is always a good idea to try searching for a solution online. Google is a great place to start. If you are unable to find a solution through a Google search, you can also try visiting Cloudflare's community page for additional help and support.

Thank you for reading our blog post and don't forget to check our last part of Flatiron Open Source.

How we created Flatiron Open Source - Frontend

· 8 min read
Umut YILDIRIM
Fullstack Developer
Ian Gottheim
Fullstack Developer

Flatiron School

Flatiron School is a 15-week coding bootcamp, with courses in software engineering, data science, cybersecurity, and product design.

Upon completion of the course, students lose access to the internal class portal, called Canvas. Students are provided with HTML and JavaScript data files containing information on the modules and courses used throughout the phases of the bootcamp.

Flatiron Open Source

The challenge we, Hope and Ian, set out to accomplish was making the data provided upon graduation user-friendly for future review and preparation for interviews. This is what sparked the idea for Flatiron Open Source. The goal was to recreate the internal class portal for Flatiron graduates to use and collaborate on.

This blog will run through the process of creating the front end, while future blogs will discuss the back end and user implementation.

How we structured our front-end

assets/css

  • CSS setup and configuration with Tailwind

components

  • This is where the site components are created for the specific site modules. Information is passed down using params.

data

  • Holds the params used for routing to the correct backend links.

views

  • This is where the components are rendered. Items from the components page are imported into views.

main

  • The main JSX file that index.html loads as its script. The views are imported into main, where React Router is used to move through the different data.


Tailwindcss/DaisyUI

CSS styling was completed using the DaisyUI plugin for TailwindCSS. The inclusion of DaisyUI makes creating components seamless, and if needed allows for customization.

To get started, you must have the Node.js runtime installed in order to use the npm package manager commands. Another option, which was used for this project, is to have Yarn installed on your computer.

The documentation for tailwind can be found here

Step 1 Install the required packages

yarn add -D tailwindcss postcss autoprefixer
yarn tailwindcss init -p
yarn add daisyui
yarn add react-daisyui

Step 2 Set Up your tailwind.config.js files

Source code for Flatiron Open Source tailwind.config.js file can be found here

The most important piece of the source code is the additional plugin to require daisyUI.
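
As a rough sketch (the content globs are illustrative, not the project's actual config), the key line is the daisyUI entry in plugins:

// tailwind.config.js (sketch)
module.exports = {
  content: ['./index.html', './src/**/*.{js,jsx}'],
  theme: {
    extend: {},
  },
  plugins: [require('daisyui')],
};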

Step 3 Add the Tailwind directives to your CSS

Source code for Flatiron Open Source tailwind directives can be found here.

That is all you need to do. The final step is to read the documentation and import the DaisyUI components you would like to use, making styling seamless for your project.

Client Side Routing

In a React application, client-side routing refers to the process of navigating to different pages or views within the app by updating the URL in the browser, without triggering a full page reload. This allows for a smoother and more efficient user experience, as the app can quickly update the displayed content without having to fetch it from the server.

One popular library for implementing client-side routing in a React app is React Router. React Router provides a collection of components that can be used to declaratively define the different routes in your app and the components that should be displayed for each route.

Vite is a build tool that is well suited to building SPAs (Single-Page Applications) with React. It provides a fast development experience with a simple setup, and its use of native ES modules together with its live-reloading feature pairs well with client-side routing.

You can use Vite with React Router to handle client-side routing in your app. You'll need to install React Router:

yarn add react-router-dom

Then import react-router-dom in your main.jsx. This is our example:

import React from 'react';
import ReactDOM from 'react-dom/client';
import { BrowserRouter, Routes, Route } from "react-router-dom";
import './assets/css/index.css';

// Routes
/* Landing Pages */
import Landing from './views/Landing';
import Courses from './views/Courses';
import Course from './views/Course';

/* Error Pages */
import NotFound from './views/errors/NotFound';

ReactDOM.createRoot(document.getElementById('root')).render(
  <React.StrictMode>
    <BrowserRouter>
      <Routes>
        {/* Landing Pages */}
        <Route path="/" element={<Landing />} />
        <Route path="/courses/:course" element={<Courses />} />
        <Route path="/course/:course/:phase" element={<Course />} />

        {/* Error Pages */}
        <Route path='/*' element={<NotFound />} />
      </Routes>
    </BrowserRouter>
  </React.StrictMode>
);

IDs and courses.js

Now you might ask, what is this?

<Route path="/course/:course/:phase" element={<Course/>}/>

We are using the useParams React hook from react-router-dom to read data from the URL. :course is a course ID like product-design, so we can request the related data from courses.js.

The useParams react hook allows for dynamic routing, in this case used for the url slug. It also helps to dynamically render the correct data for the course page.
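
As a rough sketch (the file location and data shape are assumptions, not the project's actual code), the Course view might read those params like this:

// src/views/Course.jsx (illustrative)
import { useParams } from 'react-router-dom';
import { courses } from '../data/courses';

export default function Course() {
  // Pull :course and :phase out of the URL defined by the route above
  const { course, phase } = useParams();
  const data = courses[course]?.phases?.[phase];
  if (!data) return <p>Course not found.</p>;
  return <h1>{data.title}</h1>;
}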

Why Vite 3

In most React applications, software engineers use the command below to scaffold React projects:

create-react-app <app-name>

The downside to this command is speed. Create-React-App is a bundle-based development server: it uses webpack, which bundles the application code before serving. The larger the codebase, the more time this takes.

bundle-server

Vite is a front-end Native ESM based development server with several advantages over create-react-app:

  • It takes advantage of the availability of native ES modules in the browser, and the rise of JavaScript tools written in compile-to-native languages.
  • Vite improves start time by dividing the modules in an app into dependencies and source code:
    • Dependencies: plain JavaScript that does not change during development (component libraries). Dependencies are pre-bundled using esbuild.
    • Source code: non-plain JavaScript that needs transforming (JSX/CSS components). Source code is served over native ESM; Vite only serves source code as the browser requests it and while it is currently being used.
  • Vite supports Hot Module Replacement (HMR) over native ESM. When a file is edited, Vite only needs to invalidate one chain between the edited module and the closest boundary, instead of re-constructing the entire site as a bundler would.

native-esm

The documentation found here will help you start a project with vite.

yarn create vite
yarn create vite my-react-app --template react

Google Analytics

vite-plugin-radar is a Vite plugin that allows you to easily add Google Analytics and Google Tag Manager to your website.

To use the plugin, you'll first need to install it as a dependency:

yarn add vite-plugin-radar

Then, you need to register it in your vite.config.js file:

import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import ViteRadar from 'vite-plugin-radar'

// https://vitejs.dev/config/
export default defineConfig({
plugins: [
react(),
ViteRadar({
// Google Analytics tag injection
analytics: {
id: 'G-QYFSS4RFJQ',
},
gtm: {
id: 'GTM-TTWKD6W',
},
})
],
})

Here, analytics is your Google Analytics tracking code, which is used to track the user's actions and behavior on your website. gtm is your Google Tag Manager container code, which is used to manage your analytics tags.

Once this is done, the plugin will automatically add the required GA and GTM scripts to your HTML page and configure them based on the options you provided in the configuration.

Please note that the plugin is in a beta state and its API might change; consult the plugin's documentation for the most up-to-date information.

Responsive Design

Responsive design is a method of designing and building websites that adapt to the different screen sizes and devices that people use to access the web. It's important because more and more people are accessing the internet from a variety of devices, including smartphones, tablets, laptops, and desktops.

Tailwind CSS is a popular utility-first CSS framework that can be used to build responsive designs. It provides a wide variety of utility classes that can be used to apply CSS styles to HTML elements quickly and easily. These classes are designed to be highly composable, which means you can use them together in different combinations to create complex layouts and designs.

One of the ways Tailwind CSS helps with responsive design is through its use of "responsive prefixes" that can be added to utility classes. These prefixes allow you to apply different styles to an element based on the size of the screen. For example, you can use the "sm" prefix to apply a style only when the screen is at least 640 pixels wide, or the "lg" prefix to apply a style only when the screen is at least 1024 pixels wide.
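
For instance, a container that stacks on small screens and becomes a two-column grid from the lg breakpoint up (all class names here are standard Tailwind utilities):

<div class="grid grid-cols-1 lg:grid-cols-2 gap-4">
  <div class="p-4">Left column</div>
  <div class="p-4">Right column</div>
</div>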

Keep in mind that, as with all utility frameworks, it can be harder to maintain when customizing complex designs; for that reason it is recommended to use it in conjunction with a CSS-in-JS library or some custom CSS that complements the design. So don't forget to check out TailwindCSS's documentation for up-to-date information.

How to Create Open Source Obsidian Digital Garden

· 3 min read
Umut YILDIRIM
Fullstack Developer

Are you intrigued by networked note-taking apps?

Do you want to share your own knowledge base with everyone?

Have you heard about the digital garden craze sweeping the nation and want to make one of your own?

Maybe Obsidian + Netlify will be as good to you as they have been to me.

In addition to being a great note-taking tool, Obsidian functions as an excellent content manager. When combined with a git-based deployment solution like Netlify (and a few plugins), it compares favorably to other git-based CMS's such as Forestry and Netlify CMS, with the added benefit of backlinks, graph views, and a bunch of bells and whistles.

So what are you waiting for? Follow my steps and create your own digital garden. Here is my own digital garden; this is what yours will look like when you are done with this tutorial. Hope's Garden

Note: This is work-in-progress tutorial. If you spot any problems don't hesitate to e-mail me. E-Mail

Git & Github

If your computer doesn’t have Git, you should install it. Official Link After setting up Git, go to your preferred terminal and write these lines:

$ git config --global user.name "Your Name" 
$ git config --global user.email [email protected]

If you receive an error like 'Git not found', it means you forgot to add 'git' to your operating system's PATH. Google is your friend if you receive any error :)

This project uses GitHub to host your vault contents. You need a GitHub account for this tutorial to work. Create a private GitHub repository.

Setting up Obsidian

Download Obsidian from their official website. Then click 'Create new vault'. After creating your vault, go to settings and deactivate 'Safe Mode'. This will allow us to install community plugins. Let's install a few plugins, shall we? Here is the list of plugins you need to install.

  • Advanced Tables
  • Better Word Count
  • Copy button for code blocks
  • Emoji Toolbar
  • Fullscreen mode plugin
  • Hider
  • Mind Map
  • Obsidian Git +
  • Obsidian Link Converter +

These are the plugins I personally use; you don't have to install all of them, but the ones marked with '+' are required for our setup.

After creating your vault and downloading all of your desired plugins, you need to point your vault at the GitHub repository. Open up your terminal and type:

$ git init
$ git add .
$ git commit -m "REVIVE MY GARDEN"
$ git branch -M main
$ git remote add origin https://github.com/yourgithubusername/yourgithubrepositoryname.git
$ git push -u origin main

Here we go! Now your notes should arrive in your GitHub repository, and we are finally ready to publish our website to the web.

Setting up Netlify

Download this gist and move it to your vault folder. Edit it however you like.

Create a Netlify account on their website. After signing up, you will be prompted with a website setup guide. Follow the instructions given to you: let Netlify access your GitHub account and point it to the repository you created for your Obsidian vault.

Let Netlify build your website for you. After the build completes, congratulations: you have finished setting up your own digital garden! Now all you need to do is fill it up 🪴.

Building a Website Screenshot API with Puppeteer and Google Cloud Functions

· 4 min read
Umut YILDIRIM
Fullstack Developer

Website Screenshot API

In this blog post, I describe the steps I took to set up this API, let’s dive in!

Puppeteer

Puppeteer is a node package that allows you to control a headless chrome browser using Javascript. A headless chrome browser is just a browser without a window.

I can use this package to spin up a headless chrome instance, navigate to a website and take a screenshot.

To start I’m going to create a local node project and install the puppeteer package.

npm init
npm install puppeteer

Now I can create a file called index.js and add the following code.

const puppeteer = require('puppeteer');

takeScreenshot()
  .then(() => {
    console.log("Screenshot taken");
  })
  .catch((err) => {
    console.log("Error occurred!");
    console.dir(err);
  });

async function takeScreenshot() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://medium.com", { waitUntil: 'networkidle2' });

  const buffer = await page.screenshot({
    path: './screenshot.png'
  });

  await page.close();
  await browser.close();
}

Note that I am making the takeScreenshot() function async. This way I can use the await keyword in the function to wait for all the promises.

After running the code I get the following screenshot! 🎉

Screenshot of Medium

Google Cloud Functions

So I now have a local script that I can call to take a screenshot, but I want to build an API. The next logical step is to put this script on a server somewhere.

I don’t want to worry about my server running out of memory, so I’m going to put it on Google Cloud Functions. This way it can handle a huge number of requests without me having to worry about buying more RAM memory.

Once I have the cloud function running, I can call it with an HTTP request — meaning that I will have a working screenshot API 🚀

Let’s port the previous code to the Google Cloud Function format. The cloud function I created is async and called run().

So far I have a working screenshot API. But I’m going to extend it by uploading the screenshots directly to Google Storage.

I’m going to use the @google-cloud/storage npm package for this. Note that I have created a Google Cloud Storage bucket called screenshot-api; check out this page for how to set up a storage bucket.

const puppeteer = require('puppeteer');
const { Storage } = require('@google-cloud/storage');

const GOOGLE_CLOUD_PROJECT_ID = "portfolio-umut-yildirim";
const BUCKET_NAME = "screenshot-jobs-portfolio-umut-yildirim";

exports.run = async (req, res) => {
  res.setHeader("content-type", "application/json");

  try {
    const buffer = await takeScreenshot(req.body);

    let screenshotUrl = await uploadToGoogleCloud(buffer, req.body.name + ".png");

    res.status(200).send(JSON.stringify({
      'screenshotUrl': screenshotUrl
    }));
  } catch (error) {
    res.status(422).send(JSON.stringify({
      error: error.message,
    }));
  }
};

async function uploadToGoogleCloud(buffer, filename) {
  const storage = new Storage({
    projectId: GOOGLE_CLOUD_PROJECT_ID,
  });

  const bucket = storage.bucket(BUCKET_NAME);

  const file = bucket.file(filename);
  await uploadBuffer(file, buffer, filename);

  // Make the uploaded screenshot publicly readable
  await file.makePublic();

  return `https://${BUCKET_NAME}.storage.googleapis.com/${filename}`;
}

async function takeScreenshot(params) {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox']
  });
  const page = await browser.newPage();
  await page.goto(params.url, { waitUntil: 'networkidle2' });

  const buffer = await page.screenshot();

  await page.close();
  await browser.close();

  return buffer;
}

async function uploadBuffer(file, buffer, filename) {
  return new Promise((resolve) => {
    file.save(buffer, { destination: filename }, () => {
      resolve();
    });
  });
}

The new result — My postman client is showing the URL to the screenshot 🚀

Note that in the code above, each screenshot's filename comes straight from the request body, so two requests with the same name will overwrite each other on Google Storage. In the real world, you would need to generate a random id for each image.
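
A minimal sketch of one way to do that inside exports.run, assuming Node 14.17+ where crypto.randomUUID is available:

// Derive a collision-free filename instead of trusting req.body.name
const { randomUUID } = require('crypto');

const filename = `${randomUUID()}.png`;
const screenshotUrl = await uploadToGoogleCloud(buffer, filename);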

Conclusion

Here’s the source of a Google Cloud function that, using Puppeteer, takes a screenshot of a given website and stores the resulting screenshot in a bucket on Google Cloud Storage. This was a fun project to do.

You can find the source code of the completed Google Cloud function and package.json here.

Thanks for reading!

Rails + PostgreSQL Array

· 4 min read
Umut YILDIRIM
Fullstack Developer

If you continue to read this article, I assume that you know Ruby, OOP in Ruby, RoR, and Active Record. Yes, PostgreSQL supports array types. From their documentation:

PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of any built-in or user-defined base type, enum type, composite type, range type, or domain can be created.

Let's start our journey! (I use a Rails API-only app as the example, but this article can be applied to normal Rails as well.)

Migration

It is simple:

# db/migrate/*_create_books.rb
class CreateBooks < ActiveRecord::Migration[6.0]
  def change
    create_table :books do |t|
      t.string :title
      t.string :tags, array: true, default: []
      t.integer :ratings, array: true, default: []

      t.timestamps
    end
    add_index :books, :tags, using: 'gin'
    add_index :books, :ratings, using: 'gin'
  end
end

If you want to add a new column:

# db/migrate/*_add_subjects_to_books.rb
class AddSubjectsToBooks < ActiveRecord::Migration[6.0]
  def change
    add_column :books, :subjects, :string, array: true, default: []
  end
end
info

I define the column as t.string :tags, array: true not t.array :tags.

Compare this to jsonb, which is declared as t.jsonb :payload. The difference is that there is no standalone "array" type in PostgreSQL, only "array of some column type".

PostgreSQL arrays aren't generic containers like Ruby arrays, they are more like arrays in C, C++, etc.

Create

Creating a record is very simple too:

irb(main):001:0> Book.create(title: "Hacking Growth", tags: ["business", "startup"], ratings: [4, 5])
(0.1ms) BEGIN
Book Create (0.6ms) INSERT INTO "books" ("title", "tags", "ratings", "created_at", "updated_at") VALUES ($1, $2, $3, $4, $5) RETURNING "id" [["title", "Hacking Growth"], ["tags", "{business,startup}"], ["ratings", "{4,5}"], ["created_at", "2020-06-29 08:48:42.440895"], ["updated_at", "2020-06-29 08:48:42.440895"]]
(0.4ms) COMMIT
=> #<Book id: 1, title: "Hacking Growth", tags: ["business", "startup"], ratings: [4, 5], created_at: "2020-06-29 08:48:42", updated_at: "2020-06-29 08:48:42">

Show

Both tags and ratings are now array objects:

irb(main):002:0> book = Book.first
Book Load (0.3ms) SELECT "books".* FROM "books" ORDER BY "books"."id" ASC LIMIT $1 [["LIMIT", 1]]
irb(main):003:0> book.tags
=> ["business", "startup"]
irb(main):004:0> book.tags[0]
=> "business"

Update

To update, the easiest way is:

irb(main):005:0> book.tags << 'management'
=> ["business", "startup", "management"]
irb(main):006:0> book.save!
(0.1ms) BEGIN
Book Update (1.2ms) UPDATE "books" SET "tags" = $1, "updated_at" = $2 WHERE "books"."id" = $3 [["tags", "{business,startup,management}"], ["updated_at", "2020-06-29 08:54:36.731328"], ["id", 1]]
(0.4ms) COMMIT
=> true
irb(main):007:0> book.tags
=> ["business", "startup", "management"]

There are several equivalent ways to append a value to an array attribute:

# This works
book.tags << 'management'

# This works too
book.tags.push 'management'

# This also works
book.tags += ['management']

But do not do this: Book.first.tags << 'finance'. It won't be saved to the database, because each call to Book.first instantiates a fresh object, so you mutate one instance and then save a different one. Proof:
irb(main):008:0> Book.first.tags << "finance"
Book Load (0.3ms) SELECT "books".* FROM "books" ORDER BY "books"."id" ASC LIMIT $1 [["LIMIT", 1]]
=> ["business", "startup", "management", "finance"]
irb(main):009:0> Book.first.save!
Book Load (0.3ms) SELECT "books".* FROM "books" ORDER BY "books"."id" ASC LIMIT $1 [["LIMIT", 1]]
=> true
irb(main):010:0> Book.first.tags
Book Load (0.3ms) SELECT "books".* FROM "books" ORDER BY "books"."id" ASC LIMIT $1 [["LIMIT", 1]]
=> ["business", "startup", "management"]

If you want to use raw SQL, you can refer to the official documentation.

Query

Let's say we want to find every single Book that has the tag management:

# This is valid
irb(main):011:0> Book.where("'management' = ANY (tags)")

# This is more secure
irb(main):012:0> Book.where(":tags = ANY (tags)", tags: 'management')

# This is also valid
irb(main):013:0> Book.where("tags @> ?", "{management}")

What if we want to find every single book that does NOT have the tag management:

irb(main):013:0> Book.where.not("tags @> ?", "{management}")

You can see the operators and their description in the official documentation.

Now, what if we want to search for books that contain multiple tags, like management and startup:

# @> (contains) matches books whose tags include ALL of the given values
irb(main):014:0> Book.where("tags @> ARRAY[?]::varchar[]", ["management", "startup"])

# && (overlap) matches books whose tags include ANY of the given values
irb(main):015:0> Book.where("tags && ?", "{management,startup}")

# If you use where.not, you basically search for all records that do not match the parameter given.

Now, what if we want to find all books that have at least three ratings (array_length counts the elements in the array):

irb(main):016:0> Book.where("array_length(ratings, 1) >= 3")

How about making our search a little more robust and supporting pattern matching:

# %gem% matches 'management' (the 'gem' hiding in manaGEMent)
irb(main):017:0> Book.where("array_to_string(tags, '||') LIKE :tags", tags: "%gem%")

You can see all the operators and functions and their description in the official documentation.

Final Word

That's all from me. I'll update if I find something interesting.

Source: my own experience, plus extracts from many articles.

Ruby Active Record Built-In Methods Beginner Guide

· 6 min read
Umut YILDIRIM
Fullstack Developer

It doesn't matter if you are a beginner or a seasoned developer: from time to time, you need to look up a programming language's documentation to find the specific information that helps you build an application that solves a definite task or problem. This skill is especially important for beginner developers, since beginners like myself don't have much experience using built-in methods, knowing which keywords to search for, or even just landing on the right page of the documentation. For example, some methods require a certain number of parameters, and the docs are where you find that out.

The purpose of this article is to assist and guide other beginner developers on where to start when they read and refer to built-in methods in the Ruby on Rails Guides. This time I am going to focus on introducing the Models section, which is related to setting up your backend database from scratch.

Before diving into the Models, let's take a look at what Active Record is first. Rails is made up of a number of Ruby gems which work harmoniously together, and Active Record is the one taking care of all the database work; it's what's known as an "ORM". ORM stands for Object-Relational Mapping: Active Record stores data in database tables structured as rows and columns, that data can be modified or retrieved by writing SQL statements, and Active Record lets you interact with it as if it were a normal Ruby object.

What is Active Record?

Active Record is the M in MVC (the model), which is the layer of the system responsible for representing business data and logic. Active Record facilitates the creation and use of business objects whose data requires persistent storage to a database. It is an implementation of the Active Record pattern, which itself is a description of an Object-Relational Mapping system.

What is UML?

UML, short for Unified Modeling Language, is a standardized modeling language consisting of an integrated set of diagrams, developed to help system and software developers specify, visualize, construct, and document the artifacts of software systems, as well as support business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in modeling large and complex systems, and it is a very important part of developing object-oriented software. The UML uses mostly graphical notations to express the design of software projects; using it helps project teams communicate, explore potential designs, and validate the architectural design of the software.

All About Active Record Migrations

Now you understand that the Models section is about setting up your backend database, but where should you start looking? First things first: we need to define our schema so our apps know what type of data they are receiving and storing. This is the time to refer to the Active Record Migrations page and review the migration definitions supported by the built-in change method, such as the ones below (to see the full list, you can visit here):

  • add_column
  • add_foreign_key
  • create_join_table
  • create_table
  • drop_join_table
  • drop_table (must supply a block)
  • remove_column (must supply a type)
  • remove_foreign_key (must supply a second table)
  • rename_column
  • rename_index
  • rename_table

After all, we are human and we make mistakes. Or, after a while, you may need to change a database table, such as adding or removing columns based on a new business decision your supervisor hands you. You may or may not remember how to make the change, and this page is where you can find the information you need.

All About Active Record Associations

Once we have an idea of how we would like to define the schema, we also need to think about the relationships between the database tables. For example, say we have a table that stores all the books and another table that contains all the book authors. Should we use the has_many and belongs_to types of association, or the has_and_belongs_to_many association, given that one book may have more than one author? When you come across association-related questions like these, you can search the Active Record Associations page.

All About Active Record Query Interface

Eventually, you will want to retrieve data from the database once all the tables are created, and that's the moment to spend time studying the Active Record Query Interface page, which covers the different ways you can interact with your data using Active Record. Personally, it's the page I visit the most, and the one where I most easily get confused about which finder method to use and how many arguments I can pass to it.

Below is a list of often used finder methods:

  • find
  • extending
  • from
  • group
  • having
  • includes
  • joins
  • lock
  • none
  • order
  • reorder
  • reselect
  • reverse_order
  • select
  • where

For retrieving a single object, below are the methods you can consider:

  • find
  • take
  • first
  • last
  • find_by

For retrieving multiple objects in batches, below are the methods you can consider:

  • find_each
  • find_in_batches

Although some of the finder methods may look similar at first, you should always double-check what their expected return values are before deciding which one to use. For example, how do you decide between find_by, where, and find when you want to search for data in the database?

  • Use find_by if you're expecting a single record or nil as a return
  • Use where if you're expecting an ActiveRecord::Relation object as a return
  • Use find if you're expecting a single record looked up by its primary column (usually id), raising an error when no match is found

Key Takeaway

Knowing and understanding where you get stuck is the key before you jump right into the Ruby on Rails Guide and start looking for answers. Ask yourself this question first — At what step am I having a problem now? Is it migration, association, or retrieving the data (query interface)?


Handling user authentication with Firebase in your React apps

· 15 min read
Umut YILDIRIM
Fullstack Developer

Header Image

Nowadays, security is very important on websites and apps. That's mainly to ensure that private data isn't leaked to the public and that no one performs actions on your behalf.

Today, we are going to use Firebase, a BaaS that helps us with various services such as databases, authentication, and cloud storage. We are going to see how we can use Firebase's authentication service to secure our React app.

Requirements for authenticating with Firebase in React

  • Node.js installed
  • Code editor — I prefer Visual Studio Code
  • Google account — we need this to use Firebase
  • Basic knowledge of React — I won’t recommend this tutorial for complete beginners in React

Setting up Firebase

Before we dive into React and start coding some good stuff, we need to set up our own Firebase project through the Firebase Console. To get started, navigate your browser to Firebase Console. Make sure you are logged into your Google account.

Now, click on Add project.

Once you’ve given it a sweet name, click on Continue and you should be prompted for an option to enable Google Analytics. We don’t need Google Analytics for this tutorial, but turning it on won’t do harm, so go ahead and turn it on if you want.

Once you’ve completed all the steps, you should be presented with the dashboard, which looks something like this:

Firebase Project Created

First, let's set up authentication. Click on Authentication on the sidebar and click on Get Started to enable the module. Now you will be presented with various authentication options:

Firebase Auth Page

First, click on Email/Password, enable it, and save it:

Email Password Options Firebase

Now press on Google:

Google Options Firebase

Press Enable, select a project support email address, and click on Save to activate Google Authentication for our app.

Now, let’s set up the database we are going to use for our project, which is Cloud Firestore. Click on Cloud Firestore on the sidebar and click on Create Database. You should be presented with the following dialog:

Cloud Firestore Create Database

Remember to select Start in test mode. We are using test mode because we are not dealing with production-level applications in this tutorial. Production mode databases require a configuration of security rules, which is out of the scope of this tutorial.

Click Next. Select the region. I’ll leave it to the default, and then press Enable. This should completely set up your Cloud Firestore database.

Creating and setting up a React app

Navigate to a safe folder and type the following command in the terminal to create a new React app:

npx create-react-app appname

Remember to replace appname with a name of your choice. The name doesn't really affect how the tutorial works. Once the React app is successfully created, type the following command to install a few npm packages we will need throughout the project:

npm install firebase react-router-dom react-firebase-hooks

Here, we are installing firebase to communicate with Firebase services, and we are also installing react-router-dom to handle the routing of the application. We use react-firebase-hooks to manage the authentication state of the user.
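To get a feel for what react-firebase-hooks gives us before it shows up in the larger components below, here's a minimal, hypothetical component built around its useAuthState hook (it assumes the firebase.js module we'll create shortly):

import { useAuthState } from "react-firebase-hooks/auth";
import { auth } from "./firebase";

function AuthStatus() {
  // user is the Firebase user (or null), loading is true while the initial
  // auth state is being resolved, and error holds any listener error
  const [user, loading, error] = useAuthState(auth);

  if (loading) return <p>Checking authentication...</p>;
  if (error) return <p>Error: {error.message}</p>;
  return <p>{user ? `Signed in as ${user.email}` : "Signed out"}</p>;
}

AuthStatus is just a throwaway example; we'll use the same hook inside the real Login, Register, and Dashboard components.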

Type the following command to run your React app:

cd appname && npm start

This should fire up your browser and you should see the following screen:

React App Beginning Setup Firebase

Now, let's do some cleanup so that we can continue with coding. Delete the following files from the src folder: App.test.js, logo.svg, and setupTests.js. Once you delete these files and empty out App.css, you will see an error in the React app. Don't worry; just remove the logo imports in App.js and empty the div so that your App.js looks like this:

import './App.css';

function App() {
  return (
    <div className="app">

    </div>
  );
}

export default App;

Integrating Firebase into our React app

Go to your Firebase Console dashboard, click on Project Settings, scroll down, and you should see something like this:

Project Settings No App Firebase

Click on the third icon to configure our Firebase project for the web. Enter the app name and click on Continue. Go back to the project settings and you should now see a config like this:

Web Config Firebase Console

Copy the config. Create a new file in the src folder named firebase.js. Let's first import the Firebase modules we need, since Firebase v9 uses a modular API:

import { initializeApp } from "firebase/app";
import {
  GoogleAuthProvider,
  getAuth,
  signInWithPopup,
  signInWithEmailAndPassword,
  createUserWithEmailAndPassword,
  sendPasswordResetEmail,
  signOut,
} from "firebase/auth";
import {
  getFirestore,
  query,
  getDocs,
  collection,
  where,
  addDoc,
} from "firebase/firestore";

Now paste in the config we just copied. Let’s initialize our app and services so that we can use Firebase throughout our app:

const app = initializeApp(firebaseConfig);
const auth = getAuth(app);
const db = getFirestore(app);

This will use our config to recognize the project and initialize authentication and database modules.
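For reference, the copied config is a plain object along these lines (the values here are placeholders; use the ones shown in your own console):

// Placeholder values; paste the config from your own Firebase console
const firebaseConfig = {
  apiKey: "YOUR_API_KEY",
  authDomain: "your-project.firebaseapp.com",
  projectId: "your-project",
  storageBucket: "your-project.appspot.com",
  messagingSenderId: "YOUR_SENDER_ID",
  appId: "YOUR_APP_ID",
};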

We will be creating all the important authentication-related functions in firebase.js itself. First, let's look at the Google Authentication function:

const googleProvider = new GoogleAuthProvider();
const signInWithGoogle = async () => {
  try {
    const res = await signInWithPopup(auth, googleProvider);
    const user = res.user;
    const q = query(collection(db, "users"), where("uid", "==", user.uid));
    const docs = await getDocs(q);
    if (docs.docs.length === 0) {
      await addDoc(collection(db, "users"), {
        uid: user.uid,
        name: user.displayName,
        authProvider: "google",
        email: user.email,
      });
    }
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

In the above code block, we are using a try…catch block along with async functions so that we can handle errors easily and avoid callbacks as much as possible.

First, we are attempting to log in using the GoogleAuthProvider Firebase provides us. If the authentication fails, the flow is sent to the catch block.

Then we query the database to check whether this user is already registered with us, using the user's uid. If there is no user with that uid, which also means the user is new to our app, we create a new record in our database with additional data for future reference.

Now let’s make a function for signing in using an email and password:

const logInWithEmailAndPassword = async (email, password) => {
  try {
    await signInWithEmailAndPassword(auth, email, password);
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

This code is very simple. Since we know that the user is already registered with us, we don't need to check the database and can proceed with the authentication right away. We just pass the email and password to the signInWithEmailAndPassword function, which is provided to us by Firebase.

Now, let’s create a function for registering a user with an email and password:

const registerWithEmailAndPassword = async (name, email, password) => {
  try {
    const res = await createUserWithEmailAndPassword(auth, email, password);
    const user = res.user;
    await addDoc(collection(db, "users"), {
      uid: user.uid,
      name,
      authProvider: "local",
      email,
    });
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

Since we know that the user is new to our app, we create a record for the user without checking whether one already exists in our database. It's similar to the approach we used for Google Authentication, but without the existence check.

Create a function that will send a password reset link to an email address:

const sendPasswordReset = async (email) => {
  try {
    await sendPasswordResetEmail(auth, email);
    alert("Password reset link sent!");
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

This code is simple: we just pass the email to the sendPasswordResetEmail function provided by Firebase, and Firebase takes care of sending the password reset email.

And finally, the logout function:

const logout = () => {
  signOut(auth);
};

Nothing much here, just firing up the signOut function from Firebase, and Firebase will do its magic and log out the user for us.

Finally, we export all the functions. Here's how your finished firebase.js should look:

import { initializeApp } from "firebase/app";
import {
  GoogleAuthProvider,
  getAuth,
  signInWithPopup,
  signInWithEmailAndPassword,
  createUserWithEmailAndPassword,
  sendPasswordResetEmail,
  signOut,
} from "firebase/auth";
import {
  getFirestore,
  query,
  getDocs,
  collection,
  where,
  addDoc,
} from "firebase/firestore";

const firebaseConfig = {
  apiKey: "AIzaSyDIXJ5YT7hoNbBFqK3TBcV41-TzIO-7n7w",
  authDomain: "fir-auth-6edd8.firebaseapp.com",
  projectId: "fir-auth-6edd8",
  storageBucket: "fir-auth-6edd8.appspot.com",
  messagingSenderId: "904760319835",
  appId: "1:904760319835:web:44fd0d957f114b4e51447e",
  measurementId: "G-Q4TYKH9GG7",
};

const app = initializeApp(firebaseConfig);
const auth = getAuth(app);
const db = getFirestore(app);

const googleProvider = new GoogleAuthProvider();
const signInWithGoogle = async () => {
  try {
    const res = await signInWithPopup(auth, googleProvider);
    const user = res.user;
    const q = query(collection(db, "users"), where("uid", "==", user.uid));
    const docs = await getDocs(q);
    if (docs.docs.length === 0) {
      await addDoc(collection(db, "users"), {
        uid: user.uid,
        name: user.displayName,
        authProvider: "google",
        email: user.email,
      });
    }
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

const logInWithEmailAndPassword = async (email, password) => {
  try {
    await signInWithEmailAndPassword(auth, email, password);
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

const registerWithEmailAndPassword = async (name, email, password) => {
  try {
    const res = await createUserWithEmailAndPassword(auth, email, password);
    const user = res.user;
    await addDoc(collection(db, "users"), {
      uid: user.uid,
      name,
      authProvider: "local",
      email,
    });
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

const sendPasswordReset = async (email) => {
  try {
    await sendPasswordResetEmail(auth, email);
    alert("Password reset link sent!");
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

const logout = () => {
  signOut(auth);
};

export {
  auth,
  db,
  signInWithGoogle,
  logInWithEmailAndPassword,
  registerWithEmailAndPassword,
  sendPasswordReset,
  logout,
};

Next, let’s work on the actual functionality.

Creating the login page

Create two new files, Login.js and Login.css, for the new Login component. I highly recommend installing the ES7 snippets extension in Visual Studio Code so that you can just type rfce and press Enter to create a component boilerplate.

Now, let’s assign this component to a route. To do that, we need to configure React Router. Go to App.js and import the following:

import { BrowserRouter as Router, Route, Routes } from "react-router-dom";

Then, in the JSX part of App.js, add the following configuration to enable routing for our app:

<div className="app">
  <Router>
    <Routes>
      <Route exact path="/" element={<Login />} />
    </Routes>
  </Router>
</div>

Remember to import the Login component on the top!

Go to Login.css and add the following styles. We won’t be focusing on styling much, so here are the styles for you to use:

.login {
  height: 100vh;
  width: 100vw;
  display: flex;
  align-items: center;
  justify-content: center;
}
.login__container {
  display: flex;
  flex-direction: column;
  text-align: center;
  background-color: #dcdcdc;
  padding: 30px;
}
.login__textBox {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
}
.login__btn {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
  border: none;
  color: white;
  background-color: black;
}
.login__google {
  background-color: #4285f4;
}
.login div {
  margin-top: 7px;
}

Go to Login.js, and let’s look at how our login functionality works:

import React, { useEffect, useState } from "react";
import { Link, useNavigate } from "react-router-dom";
// Import our own logInWithEmailAndPassword helper (the original imported the
// raw SDK name, which firebase.js does not export)
import { auth, logInWithEmailAndPassword, signInWithGoogle } from "./firebase";
import { useAuthState } from "react-firebase-hooks/auth";
import "./Login.css";

function Login() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [user, loading, error] = useAuthState(auth);
  const navigate = useNavigate();

  useEffect(() => {
    if (loading) {
      // maybe trigger a loading screen
      return;
    }
    if (user) navigate("/dashboard");
  }, [user, loading]);

  return (
    <div className="login">
      <div className="login__container">
        <input
          type="text"
          className="login__textBox"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
          placeholder="E-mail Address"
        />
        <input
          type="password"
          className="login__textBox"
          value={password}
          onChange={(e) => setPassword(e.target.value)}
          placeholder="Password"
        />
        <button
          className="login__btn"
          onClick={() => logInWithEmailAndPassword(email, password)}
        >
          Login
        </button>
        <button className="login__btn login__google" onClick={signInWithGoogle}>
          Login with Google
        </button>
        <div>
          <Link to="/reset">Forgot Password</Link>
        </div>
        <div>
          Don't have an account? <Link to="/register">Register</Link> now.
        </div>
      </div>
    </div>
  );
}

export default Login;

The above code might look long and hard to understand, but it’s really not. We have already covered the main authentication parts and now we are just implementing them in our layouts.

Here’s what’s happening in the above code block. We are using the functions we made in firebase.js for authentication. We are also using react-firebase-hooks along with useEffect to track the authentication state of the user. So, if the user gets authenticated, the user will automatically get redirected to the dashboard, which we are yet to make.

Here’s what you’ll see on your screen:

Basic Login Page React Firebase

Create a new component called Register to handle user registrations. Here are the styles for Register.css:

.register {
  height: 100vh;
  width: 100vw;
  display: flex;
  align-items: center;
  justify-content: center;
}
.register__container {
  display: flex;
  flex-direction: column;
  text-align: center;
  background-color: #dcdcdc;
  padding: 30px;
}
.register__textBox {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
}
.register__btn {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
  border: none;
  color: white;
  background-color: black;
}
.register__google {
  background-color: #4285f4;
}
.register div {
  margin-top: 7px;
}

After that, let’s look at how the register functionality is implemented in the layout. Use this layout in Register.js:

import React, { useEffect, useState } from "react";
import { useAuthState } from "react-firebase-hooks/auth";
// useNavigate replaces the v5-era useHistory, matching the router version
// used everywhere else in this app
import { Link, useNavigate } from "react-router-dom";
import {
  auth,
  registerWithEmailAndPassword,
  signInWithGoogle,
} from "./firebase";
import "./Register.css";

function Register() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [name, setName] = useState("");
  const [user, loading, error] = useAuthState(auth);
  const navigate = useNavigate();

  const register = () => {
    // Bail out early so we never register a user without a name
    if (!name) return alert("Please enter name");
    registerWithEmailAndPassword(name, email, password);
  };

  useEffect(() => {
    if (loading) return;
    if (user) navigate("/dashboard", { replace: true });
  }, [user, loading]);

  return (
    <div className="register">
      <div className="register__container">
        <input
          type="text"
          className="register__textBox"
          value={name}
          onChange={(e) => setName(e.target.value)}
          placeholder="Full Name"
        />
        <input
          type="text"
          className="register__textBox"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
          placeholder="E-mail Address"
        />
        <input
          type="password"
          className="register__textBox"
          value={password}
          onChange={(e) => setPassword(e.target.value)}
          placeholder="Password"
        />
        <button className="register__btn" onClick={register}>
          Register
        </button>
        <button
          className="register__btn register__google"
          onClick={signInWithGoogle}
        >
          Register with Google
        </button>
        <div>
          Already have an account? <Link to="/">Login</Link> now.
        </div>
      </div>
    </div>
  );
}

export default Register;

Here, we are using a similar approach to the one in the Login component: we are just using the functions we previously created in firebase.js. Again, we are using useEffect along with react-firebase-hooks to keep track of the user's authentication status.

Let’s look at resetting passwords. Create a new component called Reset, and here’s the styling for Reset.css:

.reset {
  height: 100vh;
  width: 100vw;
  display: flex;
  align-items: center;
  justify-content: center;
}
.reset__container {
  display: flex;
  flex-direction: column;
  text-align: center;
  background-color: #dcdcdc;
  padding: 30px;
}
.reset__textBox {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
}
.reset__btn {
  padding: 10px;
  font-size: 18px;
  margin-bottom: 10px;
  border: none;
  color: white;
  background-color: black;
}
.reset div {
  margin-top: 7px;
}

This is the layout for Reset.js:

import React, { useEffect, useState } from "react";
import { useAuthState } from "react-firebase-hooks/auth";
import { useNavigate } from "react-router-dom";
import { Link } from "react-router-dom";
// Import our sendPasswordReset helper (the original imported the raw SDK
// name, which firebase.js does not export and which needs the auth instance)
import { auth, sendPasswordReset } from "./firebase";
import "./Reset.css";

function Reset() {
  const [email, setEmail] = useState("");
  const [user, loading, error] = useAuthState(auth);
  const navigate = useNavigate();

  useEffect(() => {
    if (loading) return;
    if (user) navigate("/dashboard");
  }, [user, loading]);

  return (
    <div className="reset">
      <div className="reset__container">
        <input
          type="text"
          className="reset__textBox"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
          placeholder="E-mail Address"
        />
        <button
          className="reset__btn"
          onClick={() => sendPasswordReset(email)}
        >
          Send password reset email
        </button>
        <div>
          Don't have an account? <Link to="/register">Register</Link> now.
        </div>
      </div>
    </div>
  );
}

export default Reset;

This is similar to what we did for the Login and Register components. We are simply using the functions we created earlier.

Now, let’s focus on the dashboard. Create a new component called Dashboard, and here’s the styling for Dashboard.css:

.dashboard {
  height: 100vh;
  width: 100vw;
  display: flex;
  align-items: center;
  justify-content: center;
}
.dashboard__container {
  display: flex;
  flex-direction: column;
  text-align: center;
  background-color: #dcdcdc;
  padding: 30px;
}
.dashboard__btn {
  padding: 10px;
  font-size: 18px;
  margin-top: 10px;
  border: none;
  color: white;
  background-color: black;
}
.dashboard div {
  margin-top: 7px;
}

And here’s the layout for Dashboard.js:

import React, { useEffect, useState } from "react";
import { useAuthState } from "react-firebase-hooks/auth";
import { useNavigate } from "react-router-dom";
import "./Dashboard.css";
import { auth, db, logout } from "./firebase";
import { query, collection, getDocs, where } from "firebase/firestore";

function Dashboard() {
  const [user, loading, error] = useAuthState(auth);
  const [name, setName] = useState("");
  const navigate = useNavigate();

  const fetchUserName = async () => {
    try {
      const q = query(collection(db, "users"), where("uid", "==", user?.uid));
      const doc = await getDocs(q);
      const data = doc.docs[0].data();
      setName(data.name);
    } catch (err) {
      console.error(err);
      alert("An error occurred while fetching user data");
    }
  };

  useEffect(() => {
    if (loading) return;
    if (!user) return navigate("/");
    fetchUserName();
  }, [user, loading]);

  return (
    <div className="dashboard">
      <div className="dashboard__container">
        Logged in as
        <div>{name}</div>
        <div>{user?.email}</div>
        <button className="dashboard__btn" onClick={logout}>
          Logout
        </button>
      </div>
    </div>
  );
}

export default Dashboard;

Unlike the other components, there’s something else we are doing here. We are checking the authentication state. If the user is not authenticated, we redirect the user to the login page.

We are also querying our database and retrieving the name of the user based on their uid. Finally, we render everything on the screen.

Lastly, let’s add everything to the router. Your App.js should look like this:

import "./App.css";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
import Login from "./Login";
import Register from "./Register";
import Reset from "./Reset";
import Dashboard from "./Dashboard";

function App() {
  return (
    <div className="app">
      <Router>
        <Routes>
          <Route exact path="/" element={<Login />} />
          <Route exact path="/register" element={<Register />} />
          <Route exact path="/reset" element={<Reset />} />
          <Route exact path="/dashboard" element={<Dashboard />} />
        </Routes>
      </Router>
    </div>
  );
}

export default App;

The app is fully functional!

What's next?

Once you’re done with this build, I want you to play around with this. Try adding Facebook authentication next. What about GitHub authentication? I’d say keep experimenting with the code because that’s how you practice and learn things. If you just keep copying the code, you won’t understand the fundamentals of Firebase.
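If you want a head start on the Facebook exercise, here's a minimal sketch of what the sign-in function could look like in firebase.js. It assumes you've enabled the Facebook provider in the Firebase console, and it mirrors signInWithGoogle minus the Firestore bookkeeping:

import { FacebookAuthProvider, signInWithPopup } from "firebase/auth";

const facebookProvider = new FacebookAuthProvider();

const signInWithFacebook = async () => {
  try {
    // Same popup flow we used for Google sign-in
    const res = await signInWithPopup(auth, facebookProvider);
    console.log("Signed in as", res.user.displayName);
  } catch (err) {
    console.error(err);
    alert(err.message);
  }
};

From there, wiring it into the Login and Register layouts works exactly like the Google button.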