Tutorials · May 5, 2025 · 8 min read

Building a real-time AI autocomplete app with Next.js and Vercel AI SDK

Victor Mayowa

Fullstack Developer


Autocomplete enhances user experience by offering real-time suggestions as users type. In this tutorial, we’ll build a modern AI-powered autocomplete feature using Next.js and the Vercel AI SDK, powered by OpenAI’s GPT model. We’ll also integrate unit testing using Vitest and automate continuous integration (CI) with CircleCI.

The Vercel AI SDK is a powerful abstraction for interacting with Large Language Models (LLMs) in frontend or serverless environments. It supports multiple providers including OpenAI, Anthropic, Cohere, and Google’s Gemini. You can explore the full list of supported models in the official documentation.

This tutorial focuses on OpenAI’s model to generate smart and contextually relevant autocomplete suggestions in real-time, similar to how a search engine or code editor might work. To interact with OpenAI, you’ll need an API key from your OpenAI dashboard, which I’ll explain how to configure securely using environment variables.

By the end of this guide, you’ll have a fully functional AI autocomplete app with cleanly tested components and a CI/CD pipeline to ensure your code remains reliable and production-ready with every change.

Prerequisites

You will need the following to follow along with this tutorial:

  • Node.js (v18 or later) and npm installed on your machine
  • An OpenAI account with an API key
  • A GitHub account and a CircleCI account connected to it
  • Basic familiarity with Next.js, TypeScript, and React

Setting up your Next.js project

To begin, set up a new Next.js project with TypeScript and Tailwind CSS. Run the following command to create a project named vercel-ai-autocomplete:

npx create-next-app@latest vercel-ai-autocomplete --typescript --app --tailwind

Respond to the CLI prompts like this (options you already passed as flags, such as Tailwind and the App Router, may be skipped):

  • Would you like to use Tailwind CSS? Yes
  • Would you like to use the App Router? Yes
  • Would you like your code inside a src/ directory? Yes
  • Would you like to customize the default import alias? No

Then, navigate into the project and install the required dependencies:

cd vercel-ai-autocomplete

npm install lodash dotenv ai @ai-sdk/openai
npm install -D vitest @testing-library/react @testing-library/jest-dom @testing-library/user-event jsdom @vitejs/plugin-react @types/lodash

This will install the following dependencies:

  • lodash: A utility library for JavaScript that provides helpful functions for manipulating arrays, objects, and other data types.
  • dotenv: A zero-dependency module that loads environment variables from a .env file into process.env.
  • ai: The core Vercel AI SDK that provides abstractions for interacting with LLMs.
  • @ai-sdk/openai: The OpenAI integration for the Vercel AI SDK.
  • vitest, @testing-library/*: For writing and running unit tests.

After installing the dependencies, start the development server with:

npm run dev

This will start the Next.js development server, and you can view your app at http://localhost:3000.

Next.js development server

Configure Vitest

To set up Vitest for testing, create a vitest.config.ts file in the root of your project. This file will contain the configuration for Vitest. Open the file and add the following code:

import { defineConfig } from "vitest/config";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  test: {
    globals: true,
    environment: "jsdom",
    setupFiles: "./vitest.setup.ts",
  },
});

Next, create vitest.setup.ts:

import "@testing-library/jest-dom";

This setup file imports the jest-dom matchers (such as toBeInTheDocument) that your tests will rely on. Lastly, open the package.json file and add the following script to run the tests:

"scripts": {
  "test": "vitest"
}

This will allow you to run the tests using the command npm run test.
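Note that vitest starts in watch mode when run locally and automatically falls back to a single run on CI. If you also want an explicit one-shot run, you could add a second script; the test:run name below is just an example:

"scripts": {
  "test": "vitest",
  "test:run": "vitest run"
}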

Implementing the autocomplete feature

In this section, we’ll implement the client-side logic for your AI-powered autocomplete feature. The input component will send the user’s typed text to a serverless API route, which will in turn query OpenAI’s GPT model through the Vercel AI SDK.

As noted earlier, the Vercel AI SDK provides a unified interface for interacting with various large language models (LLMs), including OpenAI, Anthropic, and Cohere. For this tutorial, we’ll be using OpenAI’s GPT-3.5 model to generate intelligent autocomplete suggestions based on partial input.

To get started, open the src/app/page.tsx file and replace its content with the following code:

"use client";

import { useState, useCallback } from "react";
import debounce from "lodash/debounce";

export default function Page() {
  const [input, setInput] = useState("");
  const [suggestion, setSuggestion] = useState("");
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState("");

  // Debounce the request so we only query the API once the user pauses typing
  const fetchSuggestion = useCallback(
    debounce(async (text: string) => {
      setLoading(true);
      setError("");
      try {
        const res = await fetch("/api", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt: text }),
        });

        const data = await res.json();

        if (!res.ok) {
          setSuggestion("");
          setError(data.completion || "Failed to fetch suggestion");
        } else {
          const cleaned = (data.completion || "").trim();
          setSuggestion(cleaned);
        }
      } catch (err) {
        console.error("Error fetching suggestion:", err);
        setError("Something went wrong.");
        setSuggestion("");
      }
      setLoading(false);
    }, 400),
    []
  );

  const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const val = e.target.value;
    setInput(val);
    if (val.trim()) fetchSuggestion(val);
    else setSuggestion("");
  };

  const handleSuggestionClick = () => {
    setInput(suggestion);
    setSuggestion("");
  };

  return (
    <div className="space-y-4 p-4 max-w-xl mx-auto relative">
      <input
        type="text"
        value={input}
        onChange={handleChange}
        placeholder="Start typing..."
        className="w-full border p-2 rounded"
      />

      {suggestion && !error && (
        <div
          onClick={handleSuggestionClick}
          className="absolute left-4 right-4 top-16 bg-white border border-gray-300 rounded shadow-md p-2 cursor-pointer hover:bg-gray-100 z-10 text-gray-900"
        >
          {suggestion}
        </div>
      )}

      {loading && <p className="text-gray-500 text-sm">Thinking...</p>}
      {error && <p className="text-red-500 text-sm">{error}</p>}
    </div>
  );
}

This component uses the useState and useCallback hooks together with debounce from lodash to fetch suggestions from the server as the user types. Results are displayed in a dropdown below the input box in real time. The fetchSuggestion function is debounced so rapid keystrokes don’t flood the server with requests, and any failed request is surfaced as an error message instead of breaking the UI.
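If you haven’t used lodash’s debounce before, here is a quick standalone sketch of how it coalesces rapid calls; the 400 ms delay matches the component above:

import debounce from "lodash/debounce";

// Wrap a function so it only runs after 400 ms with no further calls.
const logSearch = debounce((text: string) => {
  console.log("fetching suggestion for:", text);
}, 400);

logSearch("n");
logSearch("ne");
logSearch("nex"); // only this call fires, about 400 ms after the last keystroke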

Next, you will create the API route to handle the autocomplete requests. For that, create a new directory called api inside the src/app directory, and then create a file named route.ts inside the api directory.

The directory structure should look like this:

src
├── app
│   ├── api
│   │   └── route.ts
│   ├── page.tsx
│   └── ...

The route.ts file will handle the POST requests from the client component. Open src/app/api/route.ts and add the following code:

import { NextRequest, NextResponse } from "next/server";
import { generateAutocomplete } from "../../lib/autocomplete";

export async function POST(req: NextRequest) {
  const { prompt } = await req.json();

  if (!prompt) {
    return NextResponse.json({ completion: "Missing prompt" }, { status: 400 });
  }

  try {
    const completion = await generateAutocomplete(prompt);
    return NextResponse.json({ completion });
  } catch (err) {
    console.error("Autocomplete API error:", err);
    return NextResponse.json({ error: "Server error" }, { status: 500 });
  }
}

This file defines a POST handler that powers the autocomplete functionality by interacting with OpenAI through the Vercel AI SDK. When a request is made with a JSON payload containing a prompt, the handler first checks if the prompt is present. If it’s missing, it responds with a 400 Bad Request and a helpful message.

If the prompt is provided, it calls the generateAutocomplete helper function, which uses OpenAI’s GPT-3.5 model to generate an intelligent and contextually relevant suggestion. This function is designed to simulate how an autocomplete engine might behave, returning realistic completions rather than simply predicting word fragments.

If the OpenAI API returns a result successfully, the suggestion is sent back to the client in a JSON response. If something goes wrong during the process, such as a network issue or a model error, the handler logs the error and returns a 500 Internal Server Error so the client knows the request failed.

To implement the generateAutocomplete function, create a new directory called lib inside the src directory. Then, create a file named autocomplete.ts to hold the logic for generating suggestions using OpenAI:

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function generateAutocomplete(prompt: string): Promise<string> {
  const examples = `
You are an intelligent autocomplete engine. Given a partial word or phrase, return a smart suggestion that completes it meaningfully.

Examples:
"how to ma" → "ke an HTTP request"
"react us" → "eState example"
"newsl" → "etter ideas"
"circl" → "eCI config file"
"git che" → "ckout new branch"
`;

  // Combine the few-shot examples with the user's partial input and ask the model for a completion
  const result = await generateText({
    model: openai.responses("gpt-3.5-turbo"),
    prompt: `${examples}\n\nNow complete: "${prompt}"`,
  });

  return result.text.trim();
}

This function formats a few-shot prompt to steer the model’s response and uses generateText to fetch a completion from OpenAI. It returns the trimmed result as the suggestion.
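Because the AI SDK exposes the same generateText call for every provider, swapping models is mostly a one-line change. As a rough sketch (assuming you installed the @ai-sdk/anthropic package, which is not part of this tutorial), the same helper against Claude would look like this:

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic"; // hypothetical extra dependency, not installed above

export async function generateAutocompleteWithClaude(prompt: string): Promise<string> {
  // Identical call shape to the OpenAI version; only the model reference changes.
  const result = await generateText({
    model: anthropic("claude-3-5-haiku-latest"),
    prompt: `Now complete: "${prompt}"`,
  });

  return result.text.trim();
}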

The Vercel AI SDK automatically reads the OPENAI_API_KEY from your environment variables, so you don’t need to manually pass it into the openai configuration.
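If you prefer to pass the key explicitly (for example, because it lives under a different variable name), the SDK also exports a createOpenAI factory. A minimal sketch:

import { createOpenAI } from "@ai-sdk/openai";

// Explicitly configured provider instance; without this, the default `openai`
// export reads process.env.OPENAI_API_KEY for you.
const openai = createOpenAI({
  apiKey: process.env.MY_OPENAI_KEY, // hypothetical variable name
});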

Creating environment variables

Create a .env.local file in the root of your project and add your OpenAI API key:

OPENAI_API_KEY=your_api_key_here

Make sure to replace your_api_key_here with your actual API key.

You can get your API key from the OpenAI dashboard. After signing in, navigate to the API keys section and create a new key if you don’t have one already.

Note: Be sure to add .env.local to your .gitignore file to prevent it from being pushed to your repository.
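Recent create-next-app templates already ignore local env files; if yours does not, add an entry like this to .gitignore:

# local env files
.env*.local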

Testing the autocomplete feature

Run the development server with the following command:

npm run dev

Open your browser and navigate to http://localhost:3000. Start typing into the input field and watch as real-time suggestions appear below it.

Autocomplete feature

Each keystroke (after the debounce delay) triggers a request to the API route, which calls OpenAI’s GPT-3.5 model via the Vercel AI SDK. Based on the partial input, the model generates a smart completion and the UI displays it. If anything goes wrong during the request, such as an invalid API key or an exceeded quota, the error message is shown below the input instead.

Writing unit tests

Now that you have a working autocomplete feature, you can write the unit tests to ensure everything is functioning correctly.

Create a __tests__ directory in the root of your project and create two files: page.test.tsx and route.test.ts within it. These files will contain your unit tests for the client and server components, respectively.

Starting with the client component, open __tests__/page.test.tsx and add the following code:

import { vi } from "vitest";
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import Page from "../src/app/page";

global.fetch = vi.fn(() =>
  Promise.resolve({
    ok: true,
    json: () => Promise.resolve({ completion: "Next.js example" }),
  })
) as unknown as typeof fetch;

describe("Autocomplete Page", () => {
  it("displays suggestions as user types", async () => {
    render(<Page />);
    const input = screen.getByPlaceholderText("Start typing...");
    fireEvent.change(input, { target: { value: "Next.js" } });

    await waitFor(() => {
      expect(screen.getByText("Next.js example")).toBeInTheDocument();
    });
  });
});

This unit test checks that when a user types into the autocomplete input, a mocked API response is fetched and the suggestion (“Next.js example”) is displayed in the UI. It mocks fetch, simulates typing, and uses waitFor to assert the expected output is rendered.
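If you want to cover the error path as well, you could add a second test inside the same describe block that forces the mocked fetch to fail once. This case is an optional extension, not part of the original suite:

  it("shows an error message when the request fails", async () => {
    // Override the default mock for the next fetch call only
    (global.fetch as unknown as ReturnType<typeof vi.fn>).mockResolvedValueOnce({
      ok: false,
      json: () => Promise.resolve({ completion: "Failed to fetch suggestion" }),
    });

    render(<Page />);
    fireEvent.change(screen.getByPlaceholderText("Start typing..."), {
      target: { value: "broken" },
    });

    await waitFor(() => {
      expect(screen.getByText("Failed to fetch suggestion")).toBeInTheDocument();
    });
  });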

Next, open __tests__/route.test.ts and add the following code:

import { POST } from "../src/app/api/route";
import { NextRequest } from "next/server";

vi.mock("../src/lib/autocomplete", () => ({
  generateAutocomplete: vi.fn(() =>
    Promise.resolve("mocked autocomplete response")
  ),
}));

function mockRequest(body: { prompt?: string }): NextRequest {
  // The handler only calls json(), so a minimal stub object is enough here
  return { json: async () => body } as unknown as NextRequest;
}

describe("POST /api", () => {
  it("returns a suggestion for valid prompt", async () => {
    const req = mockRequest({ prompt: "react us" });
    const res = await POST(req);
    const data = await res.json();
    expect(res.status).toBe(200);
    expect(data.completion).toContain("mocked autocomplete response");
  });

  it("returns 400 for missing prompt", async () => {
    const req = mockRequest({});
    const res = await POST(req);
    expect(res.status).toBe(400);
  });
});

This unit test verifies the server-side POST handler for the autocomplete API. It mocks the generateAutocomplete function to simulate a response from OpenAI. The first test checks that a valid prompt returns a 200 status with the mocked completion, while the second test ensures that a missing prompt results in a 400 Bad Request.
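A third case you could add covers the failure branch of the handler. Import the mocked helper at the top of the file (import { generateAutocomplete } from "../src/lib/autocomplete";) and make it reject once inside the describe block. A sketch:

  it("returns 500 when suggestion generation fails", async () => {
    // The module is already mocked above, so this only overrides the next call
    vi.mocked(generateAutocomplete).mockRejectedValueOnce(new Error("model error"));

    const req = mockRequest({ prompt: "react us" });
    const res = await POST(req);

    expect(res.status).toBe(500);
  });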

Before you run the tests, add "vitest/globals" to the types array under compilerOptions in tsconfig.json so TypeScript recognizes Vitest’s global APIs:

{
  "compilerOptions": {
    "types": ["vitest/globals"]
  }
}

Now run the tests with the following command:

npm run test

You should see output indicating that all tests have passed:

 ✓ __tests__/route.test.ts (2 tests) 4ms
 ✓ __tests__/page.test.tsx (1 test) 427ms
   ✓ Autocomplete Page > displays suggestions as user types  426ms

 Test Files  2 passed (2)
      Tests  3 passed (3)
   Start at  23:28:29
   Duration  1.19s (transform 55ms, setup 216ms, collect 165ms, tests 431ms, environment 718ms, prepare 88ms)

Integrating with CircleCI

Your next step is to set up CircleCI to run tests automatically on every push to the repository. To do this, you will need to create a CircleCI configuration file in your project. CircleCI uses a YAML file to define the build and test process. Create a .circleci directory in the root of your project and create a config.yml file inside it. Use this configuration:

version: 2.1

jobs:
  test:
    docker:
      - image: cimg/node:23.11.0
    steps:
      - checkout
      - run:
          name: Install dependencies
          command: npm ci
      - run:
          name: Run tests
          command: npm run test

workflows:
  test:
    jobs:
      - test

This CircleCI config defines a pipeline that runs tests in a Node.js 23 environment using the official cimg/node Docker image. It checks out the repository, installs dependencies with npm ci, and runs tests using npm run test whenever the workflow is triggered.
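As an optional refinement, you could let CircleCI cache dependencies between runs by using the official Node orb instead of a plain npm ci step. One possible variant (the orb version below is pinned loosely as an example):

version: 2.1

orbs:
  node: circleci/node@5

jobs:
  test:
    docker:
      - image: cimg/node:23.11.0
    steps:
      - checkout
      # Runs npm ci and caches dependencies keyed on package-lock.json
      - node/install-packages
      - run:
          name: Run tests
          command: npm run test

workflows:
  test:
    jobs:
      - test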

Save all changes and push the code to your GitHub repository.

Log in to the CircleCI dashboard and navigate to your project. If you haven’t connected your GitHub account to CircleCI yet, do that first. Then click Set Up Project next to the vercel-ai-autocomplete repository.

Search for your project

You will be prompted to select the branch housing your configuration file. Enter main and click Set Up Project.

Set up project

This will trigger your pipeline and run the tests successfully.

CircleCI pipeline successful

Conclusion

In this tutorial, you built an AI-powered autocomplete feature using Next.js and the Vercel AI SDK, with OpenAI as the LLM provider. You also implemented unit tests for both client and server components using Vitest, and integrated continuous testing with CircleCI.

This setup provides a strong foundation for building reliable, real-time AI features, making it easier to catch regressions early and ship confidently in production environments.
