Building and deploying a Python MCP server with FastMCP and CircleCI
Full Stack Engineer

Extending Large Language Models (LLMs) with custom tools has become increasingly valuable in today’s AI landscape. Model Context Protocol (MCP) servers provide a standardized way to connect external tools and resources to LLMs. This can enhance their capabilities beyond basic text generation.
While thousands of pre-built MCP servers exist, creating your own allows you to address specific workflows. You can implement use cases that off-the-shelf solutions cannot handle. This is where the real power lies.
In this tutorial, you will learn to build a document parsing server that enables MCP hosts to understand various file formats. You will use FastMCP, the leading library for building MCP servers in Python. You will also bundle your application into a Python package and publish it on PyPI using an automated CI/CD pipeline.
By the end, you will have a fully functional document reader MCP server that extracts text from various document formats.
Here is a demo of what you will build.
Prerequisites
Before diving in, make sure you have:
- Python 3.12 or newer installed on your system
- Basic understanding of Python packaging concepts
- A PyPI account for publishing your package
- A GitHub account for version control
- A CircleCI account connected to your GitHub account for creating CI/CD pipelines
- Claude Desktop for using your MCP server
Setting up the development environment
Building an MCP server opens up exciting possibilities for extending LLM capabilities with custom tools and resources. FastMCP simplifies this process with a clean API using decorators that handle server protocol complexities.
You can visit this GitHub repository to explore the code for the server you are about to build.
Initializing the project directory
First, let us create a new directory for our project and set up the basic structure:
```bash
mkdir document-brain
cd document-brain
```
Installing UV
For dependency management, you will use `uv`, a modern, fast package manager for Python. Open PowerShell and run this command to install `uv` on Windows:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```
Restart your terminal completely, then verify the installation:

```bash
uv --version
```

You can also install `uv` using pip:

```bash
pip install uv
```
On Unix-based systems such as macOS and Linux, run:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or using Homebrew
brew install uv
```
Now initialize your project with `uv`:

```bash
uv init
```

This command automatically creates:
- `.python-version`: Ensures consistent Python versions across environments
- `.gitignore`: Lists files and directories for Git to ignore
- `main.py`: A starting point for development
- `pyproject.toml`: Defines project metadata and dependencies
- `README.md`: An overview of the project's purpose, usage instructions, and other relevant information
These files collectively set up a foundational structure for a Python project. This facilitates development and collaboration.
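As an illustration, the generated `pyproject.toml` starts out roughly like this (the exact contents vary by `uv` version, and you will replace most of it later in the packaging section):

```toml
[project]
name = "document-brain"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []
```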
Creating a virtual environment and installing dependencies
When installing packages and dependencies, `uv` automatically creates a virtual environment named `.venv`.
Install the necessary packages for developing and testing your MCP server:

```bash
uv add "mcp[cli]"
uv add "markitdown[all]"
uv add --dev pytest build twine
```
These packages provide:
- FastMCP with CLI tools for development and debugging
- Markitdown for document parsing functionality
- Testing (pytest)
- Building your package (build)
- Publishing to PyPI (twine)
Before creating your MCP server, create the project directory structure:

```bash
mkdir -p src/document_brain
touch src/document_brain/__init__.py
touch src/document_brain/server.py
```
Your project structure looks like this:

```
project_root/
├── src/
│   └── document_brain/
│       ├── __init__.py      # Initializes the package
│       └── server.py        # Contains the 'mcp' instance and definitions
├── tests/
│   ├── __init__.py          # Makes tests discoverable
│   └── test_server.py       # Tests for your MCP server
├── main.py
├── pyproject.toml           # Project configuration
└── README.md                # Project documentation
```
This structure follows the recommended Python packaging standards with src-layout. The approach prevents import issues during development.
Building the MCP Server with FastMCP
In this part, you will begin by understanding three primary components of MCP servers (tools, resources, and prompts).
Understanding the core components
MCP servers consist of three key components:
- Tools: Model-controlled functions that LLMs can call to perform actions or interact with external systems.
- Resources: Application-controlled data sources that inject contextual information from your systems into the conversation.
- Prompts: User-controlled templates that can be invoked through UI elements to help users interact with the LLM in structured ways.
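To build intuition for the decorator API you are about to use, here is a stdlib-only sketch of how a decorator can register tools by name. This is not FastMCP's actual internals (the real library also handles schemas and the MCP wire protocol); it only illustrates the registration pattern:

```python
# Conceptual sketch of decorator-based tool registration.
class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def register(fn):
            # Record the function under its own name so a server
            # can dispatch incoming tool calls to it later.
            self.tools[fn.__name__] = fn
            return fn
        return register

registry = ToolRegistry()

@registry.tool()
def read_any_document(file_path: str) -> str:
    return f"would read {file_path}"

print(sorted(registry.tools))  # ['read_any_document']
```

FastMCP's `@mcp.tool()` works the same way at its core: decorating a function makes it discoverable and callable by the connected LLM.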
Now, let us implement your document brain server in `src/document_brain/server.py`:
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base
from mcp.server.fastmcp.resources import DirectoryResource
from pathlib import Path
import os
from markitdown import MarkItDown

md = MarkItDown()

# Initialize the FastMCP server
mcp = FastMCP("DocumentBrain", dependencies=["markitdown[all]"])


@mcp.tool(
    annotations={
        "title": "Read Any Document",
        "readOnlyHint": True,
        "openWorldHint": False,
    }
)
def read_any_document(file_path: str) -> str:
    """Read any supported document and return its text content, including OCR for images.

    Args:
        file_path: Path to the document to process.

    Returns:
        Extracted text content as a string.
    """
    try:
        expanded_path = os.path.expanduser(file_path)
        return md.convert(expanded_path).text_content
    except Exception as e:
        return f"Error reading file: {str(e)}"


@mcp.tool(
    annotations={
        "title": "Save File to PC",
        "readOnlyHint": False,
        "openWorldHint": True,
    }
)
def save_file_to_pc(filepath: str, content: str) -> str:
    """Save content to a file on the local machine.

    Args:
        filepath: Destination path for the file (for example, ~/Desktop/notes.txt)
        content: Content to write to the file

    Returns:
        A success or error message
    """
    try:
        # Expand a leading ~ to the user's home directory
        expanded_path = os.path.expanduser(filepath)
        # Split into directory and filename; basename guards against
        # path traversal attempts in the final component
        directory = os.path.dirname(expanded_path)
        safe_filename = os.path.basename(expanded_path)
        full_path = os.path.join(directory, safe_filename) if directory else safe_filename
        # Ensure the directory exists
        if directory:
            os.makedirs(directory, exist_ok=True)
        # Write the content to the file
        with open(full_path, "w", encoding="utf-8") as f:
            f.write(content)
        return f"File successfully saved to {full_path}"
    except Exception as e:
        return f"Error saving file: {str(e)}"


# Now add a resource
# Define the path to the current directory
documents_path = Path(".").resolve()

# Create a DirectoryResource to list files in the current directory
documents_resource = DirectoryResource(
    uri="docs://files",
    path=documents_path,
    name="Local Document Directory",
    description="Lists all files in the current working directory.",
    recursive=False,  # Set to True if you want to include subdirectories
)

# Add the resource to your FastMCP server
mcp.add_resource(documents_resource)


@mcp.resource("docs://file/{filename}")
def get_document_content(filename: str) -> str:
    """Retrieve the content of a specified document."""
    try:
        file_path = documents_path / filename
        if not file_path.exists():
            return f"File not found: {filename}"
        return md.convert(str(file_path)).text_content
    except Exception as e:
        return f"Error reading file {filename}: {str(e)}"


# Prompt: Analyze document data
@mcp.prompt()
def analyze_data(text: str) -> list[base.Message]:
    """Prompt to generate an analysis of the provided document text.

    Args:
        text: The content of the document to be analyzed.

    Returns:
        A list of messages guiding the LLM to produce an analysis.
    """
    return [
        base.UserMessage(
            "Assume the role of a data analyst specializing in academic research. "
            "Your task is to critically analyze the data presented in the attached academic document. "
            "Start by summarizing the key data points and notable findings. "
            "Identify any patterns, trends, correlations, or anomalies within the dataset:"
            f"\n\n{text}"
        )
    ]


def main():
    """Entry point for the MCP server."""
    mcp.run()


if __name__ == "__main__":
    main()
```
Here is a breakdown of what you've created:
- Tools: Two functions decorated with `@mcp.tool()`. `read_any_document` extracts text from documents and `save_file_to_pc` saves files to your PC.
- Resources: A directory resource that lists files in the current directory and a dynamic resource that retrieves a document's content.
- Prompts: A prompt that users can invoke to analyze document content.
- Main function: The `main()` function serves as an entry point for running the server.
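The path sanitization in `save_file_to_pc` relies on `os.path.basename`, which keeps only the final path component. A quick stdlib demonstration:

```python
import os

# basename strips all directory components, so traversal
# attempts in the final component collapse to a bare filename.
print(os.path.basename("../../etc/passwd"))       # passwd
print(os.path.basename("reports/q1/summary.md"))  # summary.md
```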
Now, update the `src/document_brain/__init__.py` file to expose the server components:

```python
"""Document Reader MCP server for extracting text from various document formats."""

from .server import mcp, read_any_document, main
```
Testing the MCP Server
In this part, you will run your server and test what it can do. Before testing your MCP server, activate your virtual environment:

```bash
.venv\Scripts\activate  # On Unix-based systems: source .venv/bin/activate
```
Now proceed to test the MCP server using the MCP Inspector. Make sure you are connected to the internet.
Manual testing with the MCP Inspector
FastMCP includes a built-in debugging tool called the MCP Inspector. To test your server:
```bash
mcp dev src/document_brain/server.py
```
This starts the server and opens the Inspector in your browser (typically at `http://127.0.0.1:6274`). Click Connect and explore the Tools, Resources, and Prompts tabs to test your implementation.
You can verify that your server is running.
In the Tools tab, there are two tools: `read_any_document` and `save_file_to_pc`.
You can click on any tool and test it. For example, clicking the `read_any_document` tool shows an input field for the file path. Enter the full path to any supported file (such as an Excel workbook) on your machine and click Run Tool. The tool will convert the file to markdown, displaying the extracted text content.
You can also test your MCP server in Claude Desktop by running:

```bash
mcp install src/document_brain/server.py
```
Restart Claude Desktop and the MCP server will be attached. See screenshot below.
Now ask Claude to analyze data in an Excel file, `global_inflation_data.xlsx`, saved locally.
Your MCP Server is fully operational in Claude Desktop.
Setting up automated testing
To set up a simple test for your MCP server, create a test directory:

```bash
mkdir -p tests
touch tests/test_server.py
touch tests/__init__.py
```
Now, add basic tests in `tests/test_server.py`:

```python
import pytest

from src.document_brain.server import read_any_document


# Fixture to create a temporary text file
@pytest.fixture
def temp_text_file(tmp_path):
    file_path = tmp_path / "test_document.txt"
    file_path.write_text("This is a test document.")
    return file_path


# Test reading a valid text file
def test_read_valid_document(temp_text_file):
    content = read_any_document(str(temp_text_file))
    assert "This is a test document." in content


# Test reading a non-existent file
def test_read_nonexistent_file():
    content = read_any_document("nonexistent_file.txt")
    assert "Error reading file" in content
```
These tests verify that our document reader functions work correctly. Run the tests:

```bash
pytest tests/ -v
```

The `-v` flag tells pytest to output detailed logs about the tests.
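If you want broader coverage, a test for the save tool follows the same pattern. The sketch below uses a stand-in function that mirrors the server's save logic plus `tempfile`, so it runs standalone; in your project you would instead import `save_file_to_pc` from `src.document_brain.server` and use pytest's `tmp_path` fixture:

```python
import os
import tempfile


def save_file_to_pc(filepath: str, content: str) -> str:
    # Stand-in mirroring the server's save logic, for illustration only.
    try:
        expanded = os.path.expanduser(filepath)
        directory = os.path.dirname(expanded)
        if directory:
            os.makedirs(directory, exist_ok=True)
        with open(expanded, "w", encoding="utf-8") as f:
            f.write(content)
        return f"File successfully saved to {expanded}"
    except Exception as e:
        return f"Error saving file: {e}"


def test_save_roundtrip():
    # Write into a throwaway directory, then read the file back.
    with tempfile.TemporaryDirectory() as tmp:
        target = os.path.join(tmp, "out.txt")
        assert "successfully" in save_file_to_pc(target, "hello")
        with open(target, encoding="utf-8") as f:
            assert f.read() == "hello"


test_save_roundtrip()
print("save round-trip test passed")
```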
Packaging the Python project
To prepare your application for distribution, configure the metadata in pyproject.toml
:
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "mcp-document-brain"
version = "0.1.1"
description = "MCP server for converting files to markdown using Markitdown"
readme = "README.md"
authors = [
{name = "Your name", email = "example.email@domain.com"}
]
license = {text = "MIT"}
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
requires-python = ">=3.12"
dependencies = [
"mcp[cli]>=1.8.0",
"Markitdown[all]>=0.1.1",
]
[project.optional-dependencies]
dev = [
"build>=1.2.2.post1",
"pytest>=8.3.5",
"twine>=6.1.0",
]
[project.scripts]
mcp-document-brain = "document_brain.server:main"
[tool.setuptools]
package-dir = {"" = "src"}
[tool.pytest.ini_options]
testpaths = ["tests"]
This configuration:
- Sets up basic package metadata
- Declares dependencies
- Creates a command-line entry point
- Configures our development tools
Now, add some documentation in `README.md` to provide users with more details about your package.
Build your package using the build tool:

```bash
python -m build
```

This generates distribution files in the `dist/` directory.
Publishing your package to PyPI
Before uploading your package to PyPI (the Python Package Index), you will need to complete a few important steps:
- Create a PyPI account if you don't have one already:
  - Go to the PyPI registration page
  - Verify your email address after registering
  - Set up two-factor authentication (2FA) for better security
- Generate an API token instead of using your password:
  - Log in to your PyPI account
  - Go to Account Settings → API tokens
  - Click "Add API token", give it a name (like "document-brain-upload"), and create it
  - Save the token somewhere safe; you will need it later in the tutorial
- Upload your package using Twine:

  ```bash
  twine upload dist/*
  ```

- When prompted for credentials, enter:
  - Username: `__token__` (type this exactly, including the underscores)
  - Password: paste your API token
For extra security, you can store your PyPI credentials in a `.pypirc` file in your home directory:

```
[pypi]
username = __token__
password = pypi-AgEI...your-token-here...
```
Once your package is published, anyone can install it with:

```bash
pip install mcp-document-brain
# Or using uv
uv add mcp-document-brain
```

Run it directly:

```bash
mcp-document-brain
```
You can view your published package at https://pypi.org/project/mcp-document-brain/
Troubleshooting tips
- If you get an error about the package name being taken, choose a different name in your `pyproject.toml` file
- If uploads fail, make sure your token has the right permissions and has not expired
- Check the PyPI help docs if you run into problems
Automating Python package publishing with CircleCI and uv
Automating your Python package publishing flow can save you hours of manual effort and reduce human error. In this part, you will learn how to use CircleCI to automate testing, building, and publishing a Python package to PyPI.
This section walks through a complete CircleCI setup using the uv package manager to handle dependencies. You will create a robust workflow that kicks in when you push changes to your `main` branch, ensuring your package is published only when it is production-ready.
Setting up CircleCI configuration
Create a CircleCI configuration file to automate testing, building, and publishing:

```bash
mkdir -p .circleci
touch .circleci/config.yml
```
Before breaking it down, here is the full `.circleci/config.yml`:
```yaml
version: 2.1

jobs:
  build:
    docker:
      - image: cimg/python:3.12
    steps:
      - checkout
      - run:
          name: Install uv
          command: |
            curl -Ls https://astral.sh/uv/install.sh | sh
            echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> $BASH_ENV
            source $BASH_ENV
      - run:
          name: Install dependencies using uv
          command: uv pip install --system -r <(uv pip compile --extra dev pyproject.toml)
      - run:
          name: Run tests
          command: python -m pytest tests/ -v
      - run:
          name: Build package
          command: python -m build
      - persist_to_workspace:
          root: .
          paths:
            - dist
  publish:
    docker:
      - image: cimg/python:3.12
    steps:
      - checkout
      - attach_workspace:
          at: .
      - run:
          name: Install twine
          command: pip install --upgrade twine
      - run:
          name: Upload to PyPI
          command: twine upload dist/* -u "$PYPI_USERNAME" -p "$PYPI_PASSWORD"

workflows:
  build-test-publish:
    jobs:
      - build
      - publish:
          requires:
            - build
          filters:
            branches:
              only: main
```
Understanding the CircleCI configuration
This setup creates two jobs:

`build` job

This is where you define tasks before a release:

- Docker image: Uses CircleCI's official `cimg/python:3.12` image.
- Install uv: `uv` is a faster, stable dependency manager that replaces `pip` and `pip-tools`.
- Install dependencies: `uv pip install --system -r <(uv pip compile --extra dev pyproject.toml)` compiles your `pyproject.toml` and installs both your main and optional `dev` dependencies. The `--system` flag installs them into the current Python environment.
- Run tests: Executes your test suite with `pytest`, running all tests inside the `tests/` folder with verbose output.
- Build the package: Uses `python -m build` to generate the `dist/` folder, which includes `.tar.gz` and `.whl` files for your package.
- Persist the build artifacts: These are saved to a workspace, a temporary shared storage between jobs.
`publish` job

This job picks up where `build` left off:

- Attach the workspace: Brings the previously built `dist/` folder into this job.
- Install Twine: Twine is the recommended tool to securely upload packages to PyPI.
- Upload to PyPI: The actual publishing step happens here with `twine upload dist/* -u "$PYPI_USERNAME" -p "$PYPI_PASSWORD"`. You need to store your credentials as environment variables in your CircleCI project settings.
`workflows`: CI/CD logic

This block defines when and how your jobs run:

```yaml
workflows:
  build-test-publish:
    jobs:
      - build
      - publish:
          requires:
            - build
          filters:
            branches:
              only: main
```
Here is what it means:
- The `build` job always runs.
- The `publish` job only runs after `build` completes successfully.
- It only triggers when the commit is pushed to the `main` branch.
This design ensures you do not accidentally publish from feature branches or failed builds.
Managing secrets in CircleCI
To securely publish to PyPI, add these environment variables in your CircleCI project settings:
- `PYPI_USERNAME`: Set to `__token__`
- `PYPI_PASSWORD`: Your PyPI API token
Publishing to PyPI
In this section, you will trigger the `build` and `publish` jobs.
Automating the publishing process
With your CircleCI configuration in place, you will trigger deployment as follows:
- Make changes to your code and commit them
- Push to GitHub
- Create a new project in CircleCI and link your repository
- When you are ready to release, either:
  - Merge to the main branch, or
  - Create and push a tag starting with "v" (e.g., `v0.1.0`)
- CircleCI tests, builds, and publishes your package to PyPI

The build and publish jobs should run successfully.
Versioning strategy
For versioning, follow Semantic Versioning:
- MAJOR version for incompatible API changes
- MINOR version for backwards-compatible functionality
- PATCH version for backwards-compatible bug fixes
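The bump rules above are easy to automate. Here is a small, hypothetical helper (not part of the project) that increments one component of a `MAJOR.MINOR.PATCH` string according to Semantic Versioning:

```python
def bump_version(version: str, part: str) -> str:
    """Increment one component of a MAJOR.MINOR.PATCH version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"              # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"        # new feature: reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"  # bug fix
    raise ValueError(f"unknown part: {part}")


print(bump_version("0.1.1", "patch"))  # 0.1.2
print(bump_version("0.1.1", "minor"))  # 0.2.0
print(bump_version("0.1.1", "major"))  # 1.0.0
```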
To release a new version:
- Update the version in `pyproject.toml`
- Commit the change
- Create and push a tag:

```bash
git tag v0.1.1
git push origin v0.1.1
```
Verifying the deployment
After the CI pipeline completes, verify your package by visiting its page on PyPI. For example, this project is available at https://pypi.org/project/mcp-document-brain/.
You can also test the installation:

```bash
# Create a new virtual environment
python -m venv test_env
source test_env/bin/activate  # On Windows: test_env\Scripts\activate

# Install your package from PyPI
pip install mcp-document-brain

# Test that it works
mcp-document-brain --help
```
For a thorough test, create sample documents and try using your MCP server with an LLM platform that supports MCP.
Conclusion
Congratulations! You have built a complete MCP server that extends LLM capabilities with document processing tools. You have packaged it for distribution and set up an automated CI/CD pipeline for publishing on PyPI using CircleCI.
This knowledge provides a foundation for creating more sophisticated MCP servers that could:
- Connect to databases or APIs
- Process specialized data formats
- Integrate with external services
- Execute domain-specific algorithms
MCP opens up exciting possibilities for extending LLM capabilities in standardized ways. By combining Python’s flexibility, FastMCP’s developer-friendly API, and CircleCI’s automation, you can build powerful AI-powered tools tailored to specific workflows.
Why does this matter? As AI becomes increasingly integrated into our workflows, the ability to extend LLMs with custom capabilities will be a crucial differentiator. Your custom MCP servers can provide unique value that generic AI solutions simply cannot match.
Ready to take your MCP development skills further? Sign up for CircleCI today and join the growing ecosystem of AI tool developers!