If you’ve been working with LLMs like GPT, LLaMA, or any of their cousins, you know how powerful they are. But you also know how messy things can get when you’re trying to wrangle their outputs into something structured and usable. Pydantic AI is a game-changer for anyone looking to bring order to the chaos of LLM outputs.
LLMs are incredible at generating human-like text, but their outputs are often unstructured. For example, if you ask an LLM to generate a JSON response, it might give you something that looks like JSON but isn’t valid. Or worse, it might hallucinate and give you completely irrelevant data. This unpredictability makes it hard to integrate LLMs into production systems where reliability and structure are key.
Here’s a classic example:
# Ask any LLM client for structured data (`llm` is a stand-in here)
response = llm.generate("Give me details about a user in JSON format.")
print(response)
Output:
{
    "name": "John Doe",
    "age": 30,
    "email": "johndoe@example.com",
    "hobbies": ["reading", "coding", "hiking"]
}
Looks good, right? But what if the LLM decides to throw in a typo or an extra comma? Or what if it skips a field entirely? Suddenly, your downstream application breaks.
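To see how little it takes, here is a minimal sketch: one trailing comma and the reply no longer parses at all.

import json

# An off-script reply: one trailing comma makes this invalid JSON
bad_output = '{"name": "John Doe", "age": 30,}'
try:
    json.loads(bad_output)
except json.JSONDecodeError as e:
    print("Broken JSON:", e)  # parsing fails before your app even sees the data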
Pydantic is a Python library that’s already famous for data validation and parsing. It lets you define data models with strict typing, ensuring that your data always conforms to a specific structure. Pydantic AI takes this a step further by integrating seamlessly with LLMs, making it easier to validate, parse, and structure their outputs.
With Pydantic AI, you can:
- Define a schema for the data you expect from the LLM.
- Automatically validate and parse the LLM’s output against that schema.
- Handle errors gracefully when the LLM goes off-script (a sketch of this follows below).
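On that last point, here is a rough sketch of what graceful handling can look like: a hypothetical retry loop (call_llm stands in for whatever client you use) that feeds Pydantic's error messages back to the model so it can correct itself.

from pydantic import ValidationError

def get_validated_response(prompt: str, model_cls, max_retries: int = 3):
    # call_llm is a hypothetical stand-in for your actual LLM client
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            return model_cls.model_validate_json(raw)
        except ValidationError as e:
            # Feed the validation errors back so the LLM can self-correct
            prompt += f"\n\nYour last reply failed validation:\n{e}\nReturn only valid JSON."
    raise RuntimeError("No valid response after retries")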
Let’s dive into some code to see how this works in practice. Imagine you’re building a chatbot that asks users for their details and stores them in a database. You want to ensure the LLM’s response is always in a specific format.
First, define a Pydantic model that represents the data structure you expect:
from pydantic import BaseModel, EmailStr  # EmailStr needs the optional email-validator package (pip install "pydantic[email]")

class UserDetails(BaseModel):
    name: str
    age: int
    email: EmailStr
    hobbies: list[str]
This model ensures that:
- `name` is a string.
- `age` is an integer.
- `email` is a valid email address.
- `hobbies` is a list of strings.
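As a bonus, the same model can describe itself: model_json_schema() emits a JSON Schema you can inspect (or, as we'll see later, paste into a prompt).

print(UserDetails.model_json_schema())
# {'properties': {'name': {'title': 'Name', 'type': 'string'}, ...}, 'required': [...], ...}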
Now, let’s ask the LLM for user details and validate the response using Pydantic:
import json
from pydantic import ValidationError

# Simulated LLM output
llm_output = '''
{
    "name": "Jane Smith",
    "age": 28,
    "email": "janesmith@example.com",
    "hobbies": ["painting", "traveling"]
}
'''

try:
    # Parse the JSON, then validate it against the schema
    user_data = UserDetails.model_validate(json.loads(llm_output))
    print("Validated data:", user_data)
except ValidationError as e:
    print("Validation error:", e)
Output:
Validated data: name='Jane Smith' age=28 email='janesmith@example.com' hobbies=['painting', 'traveling']
Notice how Pydantic automatically validates the data and ensures it matches the schema. If the LLM had returned invalid data (e.g., a string for `age` or a missing field), Pydantic would raise a `ValidationError`.
LLMs are notorious for generating unexpected outputs. Let’s see how Pydantic AI handles some common edge cases.
Case 1: Missing Fields
If the LLM forgets a field, Pydantic will catch it:
llm_output_missing_field = '''
{
    "name": "Jane Smith",
    "age": 28,
    "hobbies": ["painting", "traveling"]
}
'''

try:
    user_data = UserDetails.model_validate(json.loads(llm_output_missing_field))
except ValidationError as e:
    print("Validation error:", e)
Output:
Validation error: 1 validation error for UserDetails
email
  Field required [type=missing, input_value={'name': 'Jane Smith', ...}, input_type=dict]
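Of course, not every missing field is an error. If a field really is optional, declare it that way and validation will succeed with a default instead (a minimal variant of the model above):

class UserDetailsLenient(BaseModel):
    name: str
    age: int
    email: EmailStr | None = None  # tolerate a missing email
    hobbies: list[str] = []        # default to an empty list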
Case 2: Invalid Data Types
If the LLM returns a string for `age` instead of an integer:
llm_output_invalid_type = '''
{
    "name": "Jane Smith",
    "age": "twenty-eight",
    "email": "janesmith@example.com",
    "hobbies": ["painting", "traveling"]
}
'''

try:
    user_data = UserDetails.model_validate(json.loads(llm_output_invalid_type))
except ValidationError as e:
    print("Validation error:", e)
Output:
Validation error: 1 validation error for UserDetails
age
  Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='twenty-eight', input_type=str]
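One subtlety: by default Pydantic validates in "lax" mode, so a numeric string like "28" would be quietly coerced to 28 and pass. If you want the model to reject anything that is not already the right type, here is a sketch using strict mode:

from pydantic import BaseModel, ConfigDict

class StrictUserDetails(BaseModel):
    model_config = ConfigDict(strict=True)  # disable type coercion
    name: str
    age: int

# Lax mode would coerce "28" -> 28; strict mode raises a ValidationError
StrictUserDetails.model_validate({"name": "Jane Smith", "age": "28"})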
Different ways Pydantic AI can elevate LLM workflows:
- Dynamic Responses with Union Types
Sometimes, an LLM’s response could fit multiple schemas (e.g., a chatbot that can return either a `Joke` or a `Fact`). Pydantic’s `Union` types let you handle this gracefully.
from typing import Union
from pydantic import BaseModel, TypeAdapter

class Joke(BaseModel):
    setup: str
    punchline: str

class Fact(BaseModel):
    statement: str
    source: str

ResponseType = Union[Joke, Fact]
# A bare Union has no model_validate_json(); wrap it in a TypeAdapter
response_adapter = TypeAdapter(ResponseType)

llm_output_joke = '''{
    "setup": "Why did the Python developer go broke?",
    "punchline": "Because he lost his float!"
}'''
llm_output_fact = '''{
    "statement": "Honey never spoils.",
    "source": "Archaeologists found 3000-year-old honey in Egyptian tombs."
}'''

# Parse dynamically
def parse_response(response: str) -> ResponseType:
    return response_adapter.validate_json(response)

print(parse_response(llm_output_joke))  # Validates as Joke
print(parse_response(llm_output_fact))  # Validates as Fact
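If your alternatives share field names, a plain Union can get ambiguous. One common remedy is a discriminated union, sketched here with an added `kind` tag you would ask the LLM to include:

from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field, TypeAdapter

class TaggedJoke(BaseModel):
    kind: Literal["joke"]
    setup: str
    punchline: str

class TaggedFact(BaseModel):
    kind: Literal["fact"]
    statement: str
    source: str

# The "kind" tag routes each payload to exactly one model
tagged_adapter = TypeAdapter(Annotated[Union[TaggedJoke, TaggedFact], Field(discriminator="kind")])
print(tagged_adapter.validate_json('{"kind": "fact", "statement": "Honey never spoils.", "source": "Egyptian tombs"}'))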
- Custom Validators for Complex Logic
Need to enforce business rules? Pydantic’s custom validators let you add logic beyond basic type checks.
from pydantic import BaseModel, ValidationError, field_validator

class JobApplication(BaseModel):
    applicant_name: str
    years_of_experience: int

    @field_validator("years_of_experience")
    @classmethod
    def validate_experience(cls, value):
        if value < 0:
            raise ValueError("Experience can't be negative!")
        return value

# LLM output with invalid data
llm_output_negative_exp = '''{
    "applicant_name": "Alice",
    "years_of_experience": -5
}'''

try:
    JobApplication.model_validate_json(llm_output_negative_exp)
except ValidationError as e:
    print("Error:", e)
Output:
Error: 1 validation error for JobApplication
years_of_experience
  Value error, Experience can't be negative! [type=value_error, input_value=-5, input_type=int]
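Rules that span several fields work too. Here is a small sketch using Pydantic's model_validator, with illustrative field names:

from pydantic import BaseModel, model_validator

class InterviewSlot(BaseModel):
    start_hour: int
    end_hour: int

    @model_validator(mode="after")
    def check_order(self):
        # Cross-field business rule: the slot must end after it starts
        if self.end_hour <= self.start_hour:
            raise ValueError("end_hour must be after start_hour")
        return self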
- Generating Prompts from Models
Use Pydantic models to auto-generate structured prompts for LLMs. This ensures the model knows what format to follow.
class Recipe(BaseModel):
    name: str
    ingredients: list[str]
    steps: list[str]

# Build the pieces first: the JSON Schema plus a worked example
schema = Recipe.model_json_schema()
example = Recipe(
    name="Pancakes",
    ingredients=["flour", "eggs", "milk"],
    steps=["Mix ingredients", "Cook on a griddle"],
).model_dump_json(indent=2)

# Generate a prompt template
prompt_template = f"""
Generate a recipe in JSON format that matches this schema:
{schema}

Example:
{example}
"""
print(prompt_template)
The LLM now receives both the schema and an example, drastically improving output consistency.
- Handling Partial Data (Streaming Responses)
When processing streaming LLM outputs (e.g., chunk by chunk), you can validate each chunk as it arrives and merge it into the model built so far with `model_copy()`.
class PartialStory(BaseModel):
    title: str | None = None
    paragraphs: list[str] = []

# Simulate a streaming response
stream_chunks = [
    '{"title": "The AI Rebellion", "paragraphs": ["It was a dark and stormy night..."]}',
    '{"paragraphs": ["The robots had finally had enough."]}'
]

story = PartialStory()
for chunk in stream_chunks:
    update = PartialStory.model_validate_json(chunk)
    data = update.model_dump(exclude_unset=True)
    # model_copy(update=...) overwrites fields, so merge list fields by hand
    if "paragraphs" in data:
        data["paragraphs"] = story.paragraphs + data["paragraphs"]
    story = story.model_copy(update=data)
print("Final story:", story)
Output:
Final story: title='The AI Rebellion' paragraphs=['It was a dark and stormy night...', 'The robots had finally had enough.']
- Integration with LangChain
Pair Pydantic with LangChain’s `PydanticOutputParser` for end-to-end structured outputs.
from langchain.prompts import PromptTemplate
from langchain.llms import FakeListLLM  # canned-response LLM for testing
from langchain.output_parsers import PydanticOutputParser
# Note: on recent LangChain releases these live in langchain_core / langchain_community

# Define model and parser
class Quote(BaseModel):
    author: str
    text: str
    topic: str

parser = PydanticOutputParser(pydantic_object=Quote)

# Build a prompt with format instructions
prompt = PromptTemplate(
    template="Generate a quote about {topic}.\n{format_instructions}",
    input_variables=["topic"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Simulate an LLM call
llm = FakeListLLM(responses=['{"author": "AI Philosopher", "text": "To err is human; to debug, divine.", "topic": "programming"}'])

chain = prompt | llm | parser
print(chain.invoke({"topic": "programming"}))
Output:
author='AI Philosopher' text='To err is human; to debug, divine.' topic='programming'
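When the raw reply doesn't parse, LangChain's OutputFixingParser can wrap the Pydantic parser and ask an LLM to repair the malformed output first (a sketch; in practice you would swap FakeListLLM for a real model):

from langchain.output_parsers import OutputFixingParser

# On a parse failure, this wrapper sends the bad output plus the format
# instructions back to the LLM and asks it to fix the formatting.
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=llm)
chain = prompt | llm | fixing_parser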
- Automated Data Extraction
Scrape unstructured text into structured data using LLMs + Pydantic.
from pydantic import BaseModel
import json

class Event(BaseModel):
    name: str
    date: str
    location: str

# Extract events from messy text
text = """
Upcoming Events:
- PyCon 2024 | May 15-20 | Cleveland, OH
- AI Summit, June 5-7, Virtual
"""

# Ask the LLM to parse `text` into a list of Event objects (simulated reply)
llm_response = '''
[
    {"name": "PyCon 2024", "date": "May 15-20", "location": "Cleveland, OH"},
    {"name": "AI Summit", "date": "June 5-7", "location": "Virtual"}
]
'''

events = [Event.model_validate(e) for e in json.loads(llm_response)]
print("Extracted events:", events)
Pydantic AI ensures your LLM outputs are always in the expected format, reducing the risk of downstream errors. With just a few lines of code, you can define and enforce complex data schemas. Pydantic’s detailed error messages make it easy to debug issues with LLM outputs, and it works seamlessly with popular LLM frameworks like LangChain, making it a natural fit for AI pipelines.
Notes:
- Type-Driven Development: Define your data models first; they'll guide both your LLM prompts and validation logic.
- Error Feedback Loops: Use Pydantic's errors to improve prompts (e.g., "The LLM keeps missing the `email` field, so add an example to the prompt!").
- Schema-as-Documentation: Your Pydantic models double as living documentation for your LLM integrations.