Implementing image recognition with React and continuous deployment


Integrating artificial intelligence (AI) into web applications can significantly enhance user experience. AI offers features like image recognition to process and analyze user-uploaded images. Combining this with a robust continuous integration and continuous deployment (CI/CD) pipeline using CircleCI ensures seamless updates and reliable delivery.
In this article, you will learn how to build a React app that uses TensorFlow.js for client-side image recognition and set up automated testing with CircleCI. Users will be able to upload images via a simple file input, and the app will process them using a pre-trained model.
By the end of this guide, you’ll have a fully functional image recognition app with cleanly tested components and a CircleCI pipeline to ensure your code remains reliable and production-ready with every change.
Prerequisites
To follow along with this tutorial you will need:
- A GitHub account
- A CircleCI account
- Node.js (v20 or higher) installed on your machine
- Basic knowledge of JavaScript and React
Setting up your React.js project
In this part of the tutorial, you will set up a new React.js project using Vite and style it with Tailwind CSS. You will install the necessary dependencies for image recognition: TensorFlow.js and the COCO-SSD model.
Start by creating a new React project using Vite:
npx create-vite@latest image-recognition-app --template react
Then go to the newly created project directory:
cd image-recognition-app
Install Tailwind CSS and the Vite plugin:
npm install tailwindcss @tailwindcss/vite
To configure the Vite plugin, open the vite.config.js file and replace the existing code with this:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import tailwindcss from '@tailwindcss/vite';

export default defineConfig({
  plugins: [
    react(),
    tailwindcss(),
  ],
});
Next, import Tailwind CSS by replacing the code in your src/index.css file with this import statement:
@import "tailwindcss";
Start the development server:
npm run dev
This will start the Vite development server, and you can view your app at http://localhost:5173/.
Implementing the image recognition feature
In this section, you will implement the core logic for the image recognition feature using React. Users will upload an image, which will then be processed using the COCO-SSD model from TensorFlow.js to detect and label the objects in the image.
TensorFlow.js offers several pre-trained models for computer vision tasks. In this guide, you will use COCO-SSD, which can identify and localize multiple objects within an image, unlike MobileNet, which only classifies the entire image.
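Later in this tutorial you will call the model's detect() method. Each prediction it returns is a plain object containing a bounding box, a class label, and a confidence score, roughly like this (the values here are purely illustrative):
// Shape of a single coco-ssd prediction (illustrative values).
// bbox is [x, y, width, height] in pixels, relative to the input image.
const examplePrediction = {
  bbox: [24, 56, 320, 210],
  class: "dog",
  score: 0.91,
};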
Start by installing the required dependencies for browser-based object detection:
npm install @tensorflow/tfjs @tensorflow-models/coco-ssd
To keep things organized, create a components folder inside the src directory, and add a new file called ImageRecognition.jsx with this code:
import React, { useRef, useState, useEffect } from "react";
import * as cocoSsd from "@tensorflow-models/coco-ssd";
import "@tensorflow/tfjs";

const ImageRecognition = () => {
  const [image, setImage] = useState(null);
  const [predictions, setPredictions] = useState([]);
  const [model, setModel] = useState(null);
  const canvasRef = useRef(null);
  const imageRef = useRef(null);

  // Load the coco-ssd model on component mount
  useEffect(() => {
    async function loadModel() {
      const loadedModel = await cocoSsd.load();
      setModel(loadedModel);
    }
    loadModel();
  }, []);

  // Handle image upload
  const handleImageUpload = (event) => {
    const file = event.target.files[0];
    if (file) {
      const imageUrl = URL.createObjectURL(file);
      setImage(imageUrl);
    }
  };

  // Run object detection when an image is loaded
  useEffect(() => {
    if (image && model) {
      const img = imageRef.current;
      img.onload = async () => {
        const preds = await model.detect(img);
        setPredictions(preds);
        drawBoundingBoxes(preds);
      };
    }
  }, [image, model]);

  // Draw bounding boxes on the canvas
  const drawBoundingBoxes = (predictions) => {
    const canvas = canvasRef.current;
    const ctx = canvas.getContext("2d");
    const img = imageRef.current;
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    predictions.forEach((prediction) => {
      const [x, y, width, height] = prediction.bbox;
      ctx.strokeStyle = "red";
      ctx.lineWidth = 2;
      ctx.strokeRect(x, y, width, height);
      ctx.font = "18px Arial";
      ctx.fillStyle = "red";
      ctx.fillText(
        `${prediction.class} (${Math.round(prediction.score * 100)}%)`,
        x,
        y > 10 ? y - 5 : y + 15
      );
    });
  };

  return (
    <div className="container mx-auto p-6 max-w-3xl bg-white rounded-lg shadow-lg">
      <h1 className="text-3xl font-bold text-gray-800 mb-6 text-center">
        AI-Powered Image Recognition
      </h1>
      <div className="mb-6">
        <label
          htmlFor="imageUpload"
          className="inline-block px-6 py-3 bg-blue-600 text-white text-sm font-medium rounded-lg shadow hover:bg-blue-700 transition cursor-pointer"
        >
          Upload an Image
        </label>
        <input
          id="imageUpload"
          type="file"
          accept="image/*"
          onChange={handleImageUpload}
          className="hidden"
        />
      </div>
      {image && (
        <div className="relative mb-6">
          <img
            src={image}
            ref={imageRef}
            alt="Uploaded"
            className="max-w-full h-auto rounded-lg shadow-md"
          />
          <canvas
            ref={canvasRef}
            className="absolute top-0 left-0"
            style={{ pointerEvents: "none" }}
          />
        </div>
      )}
      {predictions.length > 0 && (
        <div className="bg-gray-50 p-4 rounded-lg shadow-inner">
          <h2 className="text-xl font-semibold text-gray-800 mb-3">
            Detected Objects
          </h2>
          <ul className="list-disc pl-5 space-y-2">
            {predictions.map((pred, index) => (
              <li key={index} className="text-gray-700">
                <span className="font-medium">{pred.class}</span> - Confidence:{" "}
                {Math.round(pred.score * 100)}%
              </li>
            ))}
          </ul>
        </div>
      )}
    </div>
  );
};

export default ImageRecognition;
This component handles the entire image recognition workflow using React hooks. It loads the COCO-SSD model once when the component mounts using useEffect, and allows users to upload an image through a styled file input. When an image is uploaded, it's displayed in the UI and passed to the model for object detection. The predictions returned from the model are stored in state and used to draw bounding boxes on a <canvas> overlaid on the image. These boxes show each detected object's label and confidence score. The component also lists all the detected objects below the image for quick reference.
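One optional refinement, not required for this tutorial: URL.createObjectURL keeps the underlying blob alive until the page is unloaded, so if users upload many images in a single session you could revoke the previous URL before creating a new one. A minimal sketch of such a variant, assuming the same image state and setImage setter as in ImageRecognition.jsx:
// Optional variant of handleImageUpload that frees the previous object URL
const handleImageUpload = (event) => {
  const file = event.target.files[0];
  if (file) {
    if (image) {
      URL.revokeObjectURL(image); // release the blob backing the previous preview
    }
    setImage(URL.createObjectURL(file));
  }
};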
To see the image recognition feature in action, import the component into the root component by replacing the code in the src/App.jsx file with this:
import ImageRecognition from './components/ImageRecognition';
export default function App() {
  return (
    <ImageRecognition />
  );
}
If your development server is not running, start it with npm run dev, then go to http://localhost:5173 in your browser. Click the Upload an Image button to select an image file. Once it's uploaded, the model will analyze the image and highlight any detected objects. The results will be shown both as bounding boxes on the image and as a list below it.
Configuring Jest
Before writing unit tests for your image recognition feature, you need to set up Jest with the right environment and tools for testing React components and browser APIs like canvas.
Start by installing the required dependencies:
npm install --save-dev jest jest-environment-jsdom @testing-library/react @testing-library/jest-dom @babel/preset-env @babel/preset-react babel-jest @babel/core identity-obj-proxy
You will use jest as the test runner, @testing-library/react for rendering and querying components, and @testing-library/jest-dom for custom matchers like toBeInTheDocument. Babel is needed to transpile modern JavaScript and JSX, while identity-obj-proxy lets you mock CSS modules during testing.
Next, create a .babelrc file in your project root to configure Babel:
{
  "presets": ["@babel/preset-env", "@babel/preset-react"]
}
This ensures Jest can understand modern JavaScript and JSX syntax.
Then, configure Jest by creating a jest.config.js file in the root of your project with this code:
export default {
  testEnvironment: "jest-environment-jsdom",
  setupFilesAfterEnv: ["<rootDir>/setupTests.js"],
  transform: {
    "^.+\\.jsx?$": "babel-jest",
  },
  moduleNameMapper: {
    "\\.(css|less|scss|sass)$": "identity-obj-proxy",
  },
};
This tells Jest to use the jsdom environment, apply Babel transforms, and mock CSS modules using identity-obj-proxy.
Next, create a setupTests.js file in the project root folder to configure global test behavior:
import "@testing-library/jest-dom";
// Mock the 2D canvas context to avoid errors during tests
HTMLCanvasElement.prototype.getContext = () => {
  return {
    clearRect: jest.fn(),
    strokeRect: jest.fn(),
    beginPath: jest.fn(),
    fillText: jest.fn(),
    font: "",
    lineWidth: 0,
    strokeStyle: "",
  };
};
With Jest now configured, you’re ready to write tests for your image recognition component.
Open the package.json file and add a test script to the scripts section so that you can run the tests with npm run test:
"test": "jest"
Testing the image recognition feature
With Jest configured, you can now write unit tests for the ImageRecognition component to validate key behaviors such as rendering, file uploads, model predictions, and UI updates.
Start by creating a __tests__ directory at the root of your project, and inside it, add a new file named image-recognition.test.jsx with this code:
import React from "react";
import { render, screen, fireEvent, waitFor, act } from "@testing-library/react";
import ImageRecognition from "../src/components/ImageRecognition";

// Mock TensorFlow model
const mockDetect = jest.fn();
const mockLoad = jest.fn().mockResolvedValue({ detect: mockDetect });

jest.mock("@tensorflow-models/coco-ssd", () => ({
  load: () => mockLoad(),
}));

jest.mock("@tensorflow/tfjs", () => ({}));

// Stub createObjectURL globally
global.URL.createObjectURL = jest.fn(() => "mock-url");

describe("ImageRecognition Component", () => {
  beforeEach(() => {
    mockDetect.mockReset();
  });

  test("renders main heading", async () => {
    await act(async () => {
      render(<ImageRecognition />);
    });
    expect(
      screen.getByRole("heading", {
        name: /AI-Powered Image Recognition/i,
      })
    ).toBeInTheDocument();
  });

  test("renders file input with label", async () => {
    await act(async () => {
      render(<ImageRecognition />);
    });
    const input = screen.getByLabelText(/Upload an Image/i);
    expect(input).toBeInTheDocument();
    expect(input).toHaveAttribute("type", "file");
  });

  test("loads model on mount", async () => {
    await act(async () => {
      render(<ImageRecognition />);
    });
    expect(mockLoad).toHaveBeenCalled();
  });

  test("displays uploaded image", async () => {
    await act(async () => {
      render(<ImageRecognition />);
    });
    const fileInput = screen.getByLabelText(/Upload an Image/i);
    const file = new File(["dummy"], "image.png", { type: "image/png" });
    fireEvent.change(fileInput, { target: { files: [file] } });
    await waitFor(() => {
      expect(screen.getByAltText("Uploaded")).toHaveAttribute("src", "mock-url");
    });
  });

  test("renders predictions and detected objects list", async () => {
    mockDetect.mockResolvedValueOnce([
      { class: "car", score: 0.9, bbox: [10, 10, 100, 100] },
      { class: "dog", score: 0.8, bbox: [120, 50, 150, 150] },
    ]);
    await act(async () => {
      render(<ImageRecognition />);
    });
    const fileInput = screen.getByLabelText(/Upload an Image/i);
    const file = new File(["dummy"], "image.png", { type: "image/png" });
    fireEvent.change(fileInput, { target: { files: [file] } });
    const img = await screen.findByAltText("Uploaded");
    Object.defineProperty(img, "width", { value: 400 });
    Object.defineProperty(img, "height", { value: 300 });
    act(() => {
      img.onload();
    });
    await waitFor(() => {
      expect(screen.getByText(/Detected Objects/i)).toBeInTheDocument();
      expect(screen.getByText(/car/i)).toBeInTheDocument();
      expect(screen.getByText(/dog/i)).toBeInTheDocument();
    });
  });

  test("handles case with no predictions", async () => {
    mockDetect.mockResolvedValueOnce([]);
    await act(async () => {
      render(<ImageRecognition />);
    });
    const fileInput = screen.getByLabelText(/Upload an Image/i);
    const file = new File(["dummy"], "image.png", { type: "image/png" });
    fireEvent.change(fileInput, { target: { files: [file] } });
    await waitFor(() => {
      expect(screen.queryByText(/Detected Objects/i)).not.toBeInTheDocument();
    });
  });

  test("does not crash if no file is uploaded", async () => {
    await act(async () => {
      render(<ImageRecognition />);
    });
    const input = screen.getByLabelText(/Upload an Image/i);
    fireEvent.change(input, { target: { files: [] } });
    expect(screen.queryByAltText("Uploaded")).not.toBeInTheDocument();
  });

  test("canvas element is present and positioned", async () => {
    mockDetect.mockResolvedValueOnce([{ class: "cat", score: 0.95, bbox: [1, 1, 50, 50] }]);
    const { container } = render(<ImageRecognition />);
    const fileInput = screen.getByLabelText(/Upload an Image/i);
    const file = new File(["dummy"], "image.png", { type: "image/png" });
    fireEvent.change(fileInput, { target: { files: [file] } });
    await waitFor(() => {
      const canvas = container.querySelector("canvas");
      expect(canvas).toBeInTheDocument();
      expect(canvas.style.pointerEvents).toBe("none");
    });
  });
});
These tests confirm the core functionality of your image recognition feature, from model loading and image upload to prediction display and UI updates.
Before running the tests, update the eslint.config.js file so that ESLint does not flag the Jest globals (describe, test, expect, and so on) used in your test files as undefined. The Vite React template uses ESLint's flat config format, so add an entry like the one below to the exported configuration array. It relies on the globals package, which the template already imports; if your config does not, install it with npm install --save-dev globals. This allows the test files to pass linting without warnings or errors.
{
  files: ["__tests__/**/*", "setupTests.js"],
  languageOptions: {
    globals: {
      ...globals.jest,
    },
  },
},
Now run the tests:
npm run test
Output should show that all tests have passed:
> image-recognition-app@0.0.0 test
> jest

PASS __tests__/image-recognition.test.jsx
  ImageRecognition Component
    ✓ renders main heading (98 ms)
    ✓ renders file input with label (10 ms)
    ✓ loads model on mount (8 ms)
    ✓ displays uploaded image (17 ms)
    ✓ renders predictions and detected objects list (26 ms)
    ✓ handles case with no predictions (10 ms)
    ✓ does not crash if no file is uploaded (4 ms)
    ✓ canvas element is present and positioned (8 ms)

Test Suites: 1 passed, 1 total
Tests: 8 passed, 8 total
Snapshots: 0 total
Time: 1.714 s, estimated 2 s
Ran all test suites.
Integrating with CircleCI
To automate testing, you will set up CircleCI to run your tests automatically on every push to your GitHub repository. CircleCI uses a YAML configuration file to define build steps.
Start by creating a .circleci directory in the root of your project. Create a config.yml file in it and add this content:
version: 2.1

jobs:
  test:
    docker:
      - image: cimg/node:23.11.0
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: npm ci
      - run:
          name: Run tests
          command: npm run test

workflows:
  test:
    jobs:
      - test
This configuration defines a simple workflow that runs inside the official cimg/node:23.11.0 Docker image. It checks out your code, installs dependencies using npm ci, and runs the tests using npm run test whenever the workflow is triggered.
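This job installs dependencies from scratch on every run. As an optional tweak that is not part of the tutorial's config, you could cache npm's download cache between runs with CircleCI's restore_cache and save_cache steps, keyed on package-lock.json. A sketch of what the adjusted steps might look like:
    steps:
      - checkout
      # Restore the npm cache from a previous run with the same lockfile
      - restore_cache:
          keys:
            - npm-cache-v1-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm ci
      # Save the npm cache for future runs
      - save_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
          paths:
            - ~/.npm
      - run:
          name: Run tests
          command: npm run test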
Upload your project to GitHub, then connect it to CircleCI by creating a new project and selecting your repository.
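If the project is not on GitHub yet, a typical sequence looks something like this (the remote URL is a placeholder; replace it with your own repository):
git init
git add .
git commit -m "Add image recognition app with tests and CircleCI config"
git branch -M main
git remote add origin https://github.com/<your-username>/image-recognition-app.git
git push -u origin main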
Once set up, you can trigger the pipeline manually from the CircleCI dashboard or push a new commit to your repository. You should see the pipeline execute successfully.
You can access the full code on GitHub.
Conclusion
In this tutorial, you integrated TensorFlow.js into a React app for in-browser image recognition and set up automated testing with CircleCI. This setup allows users to upload images and get real-time object detection. Meanwhile, your tests run automatically with every push, keeping your codebase reliable and production-ready.