Master hexagonal architecture in Rust
Take the pain out of scaling. This guide has everything you need to write flexible, future-proof Rust applications using hexagonal architecture.
Contents
- Anatomy of a bad Rust application
- Separation of concerns, Rust-style
- The repository pattern in Rust
- Domain models
- Error types and hexagonal architecture
- Implementing AuthorRepository
- Everything but the kitchen async
- From the Very Bad Application to the merely Bad Application
- Testing HTTP handlers with injected repositories
- Service, the heart of hexagonal architecture
- Introducing the Service trait
- main is for bootstrapping
- Why hexagons?
- How to choose the right domain boundaries
- A Rust project template for hexagonal architecture
- Is hexagonal architecture right for me?
- Trade-offs of hexagonal architecture in Rust
- Advanced techniques in hexagonal architecture
- Exercises
- Discussion
Hexagonal architecture. You've heard the buzzwords. You've wondered, "why hexagons?". You think domain-driven design is involved, somehow. Your company probably says they're using it, but you suspect they're doing it wrong.
Let me clear things up for you.
By the end of this guide, you'll have everything you need to write ironclad Rust applications using hexagonal architecture.
I will get you writing the most maintainable Rust of your life. Your production errors will fall. Test coverage will skyrocket. Scaling will get less painful.
If you haven't read The Ultimate Guide to Rust Newtypes yet, I recommend doing so first – type-driven design is the cherry to hexagonal architecture's sundae, and you'll see many examples of newtypes in this tutorial.
Now, this is a big topic. Huge (O'Reilly, hit me up). I'm going to publish it section by section, releasing the next only once you've had a chance to digest the last and tackle the exercises for each new concept. Bookmark this page if you don't want to miss anything – I'll add every new section here.
I'll be using a blogging engine with an axum web server as our primary example throughout this guide. Over time, we'll build it into an application of substantial complexity.
The type of app and the crates it uses are ultimately irrelevant, though. The principles of hexagonal architecture aren't confined to web apps – any application that receives external input or makes requests to the outside world can benefit.
Let's get into it.
Anatomy of a bad Rust application
The answer to the question "why hexagons?" is boring, so we're not going to start there.
How To Code It is all about code! I'm going to start by showing you how not to write applications in Rust. By studying a Very Bad Application, you'll see the problems that hexagonal architecture fixes clearly.
The Very Bad Application is the most common way to write production services. Your company will have code that looks just like it. Zero To Production In Rust writes its tutorial app in a similar way. In fairness, it has its hands full with teaching us Rust, and it only promised to get us to production, not keep us there...
The Very Bad Application is a scaling and maintainability time bomb. It is a misery to test and refactor. It will increase your staff turnover and lower your annual bonus.
Here's `main.rs`:
//! src/bin/server/main.rs

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let config = Config::from_env()?;

    // A minimal tracing middleware for request logging.
    tracing_subscriber::fmt::init();
    let trace_layer = tower_http::trace::TraceLayer::new_for_http().make_span_with(
        |request: &axum::extract::Request<_>| {
            let uri = request.uri().to_string();
            tracing::info_span!("http_request", method = ?request.method(), uri)
        }, // (1)
    );

    let sqlite = SqlitePool::connect_with( // (2)
        SqliteConnectOptions::from_str(&config.database_url)
            .with_context(|| format!("invalid database path {}", &config.database_url))?
            .pragma("foreign_keys", "ON"),
    )
    .await
    .with_context(|| format!("failed to open database at {}", &config.database_url))?;

    let app_state = AppState {
        sqlite: Arc::new(sqlite), // (3)
    };

    let router = axum::Router::new() // (4)
        .route("/authors", post(create_author))
        .layer(trace_layer)
        .with_state(app_state);

    let listener = net::TcpListener::bind(format!("0.0.0.0:{}", &config.server_port))
        .await
        .with_context(|| format!("failed to listen on {}", &config.server_port))?;

    tracing::debug!("listening on {}", listener.local_addr().unwrap());
    axum::serve(listener, router)
        .await
        .context("received error from running server")?;

    Ok(())
}
This code loads the application config from the environment, configures some tracing middleware, creates an SQLite connection pool, and injects it into an axum HTTP router. We have one route: `POST /authors`, for creating blog post authors. Finally, it binds a Tokio `TcpListener` to the application port and fires up the server.
We're concerned about architecture, so I've omitted details like a panic recovery layer, the finer points of tracing, graceful shutdown, and most of the routes a full app would have.
Even so, this is a fat `main` function. If you're tempted to say that it could be improved by moving the application setup logic to a dedicated `setup` module, you're not wrong – but your priorities are. There is much greater evil here.
Firstly, why is `main` configuring HTTP middleware (1)? In fact, it looks like `main` needs an intimate understanding of the whole axum crate just to get the server running (4)! axum isn't even part of our codebase – it's a third-party dependency that has escaped containment.
You'd have the same problem if this code lived in a `setup` module. It's not the location of the setup, but the failure to encapsulate and abstract dependencies that makes this code hard to maintain.
If you ever change your HTTP server, `main` has to change too. To add middleware, you modify `main`. Major version changes in axum could force you to change `main`.
We have the same issue with the database at (2), where we shackle our main function to one particular, third-party implementation of an SQLite client. We then make things much worse by flowing this concrete representation – an imported struct outside our control – through the whole application. See how we pass `sqlite` into axum as a field of `AppState` (3) to make it accessible to our HTTP handlers?
To change your database client – not even to change the kind of database, just the code that calls it – you'd have to rip out this hard dependency from every corner of your application.
This isn't a leaky abstraction, it's a broken dam.
Take a moment to recover, because I'm about to show you the `create_author` handler, and it's a bloodbath.
//! src/lib/routes.rs

// Various definitions omitted...

pub async fn create_author(
    State(state): State<AppState>, // (5)
    Json(author): Json<CreateAuthorRequestBody>,
) -> Result<ApiSuccess<CreateAuthorResponseData>, ApiError> {
    if author.name.is_empty() { // (6)
        return Err(ApiError::UnprocessableEntity(
            "author name cannot be empty".to_string(),
        ));
    }

    let mut tx = state // (7)
        .sqlite
        .begin()
        .await
        .context("failed to start transaction")?;

    let author_id = save_author(&mut tx, &author.name).await.map_err(|e| {
        if is_unique_constraint_violation(&e) { // (8)
            ApiError::UnprocessableEntity(format!(
                "author with name {} already exists",
                &author.name
            ))
        } else {
            anyhow!(e).into()
        }
    })?;

    tx.commit().await.context("failed to commit transaction")?;

    Ok(ApiSuccess::new(
        StatusCode::CREATED,
        CreateAuthorResponseData {
            id: author_id.to_string(),
        },
    ))
}
Stay with me! Suppress the urge to vomit. We'll get through this together and come out as better Rust devs.
Look, there's that hard dependency on sqlx (5), polluting the system on cue 🙄. And holy good god, our HTTP handler is orchestrating database transactions (7). An HTTP handler shouldn't even know what a database is, but this one knows SQL!
//! src/lib/routes.rs

async fn save_author(tx: &mut Transaction<'_, Sqlite>, name: &str) -> Result<Uuid, sqlx::Error> {
    let id = Uuid::new_v4();
    let id_as_string = id.to_string();
    let query = sqlx::query!(
        "INSERT INTO authors (id, name) VALUES ($1, $2)",
        id_as_string,
        name
    );
    tx.execute(query).await?;
    Ok(id)
}
And the horrifying consequence of this is that the handler also has to understand the specific error type of the database crate – and the database itself (8):
//! src/lib/routes.rs

const UNIQUE_CONSTRAINT_VIOLATION_CODE: &str = "2067";

fn is_unique_constraint_violation(err: &sqlx::Error) -> bool {
    if let sqlx::Error::Database(db_err) = err {
        if let Some(code) = db_err.code() {
            if code == UNIQUE_CONSTRAINT_VIOLATION_CODE {
                return true;
            }
        }
    }
    false
}
Refactoring this kind of code is miserable, you get that. But here's the kicker – unit testing this kind of code is impossible.
You cannot call this handler without a real, concrete instance of an sqlx Sqlite connection pool.
And don't come at me with "it's fine, we can still integration test it", because that's not enough. Look at how complex the error handling is. We've got inline request body validation (6), transaction management (7), and sqlx errors (8) in one function.
Integration tests are slow and expensive – they aren't suited to exhaustive coverage. And how are you going to test the scenario where the transaction fails to start? Will you make the real database fall over?
This architecture is game over for maintainability. Nightmare fuel.
Hard dependencies and hexagonal architecture: how to make the right call
Hard dependencies aren't irredeemably evil – you'll see several as we build our hexagonal answer to the Very Bad Application – but they are use case-dependent.
Tokio is a hard dependency of most production Rust applications. This is by necessity. An async runtime is a dependency on a grand scale, practically part of the language itself. Your application can't function without it, and its purpose is so fundamental that you'd gain nothing from attempting to abstract it away.
In these situations, consider the few alternatives carefully, and accept that changing your mind later will mean a painful refactor. Most of all, look for evidence of widespread community adoption and support.
HTTP packages, database clients, message queues, etc. do not fall into this category. Teams opt to change these dependencies regularly, for reasons including:
- scaling pressures that require new technical solutions
- deprecation of key libraries
- security threats
- someone more senior said so.
It's critical that we abstract these packages behind our own, clean interfaces, forcing conformity with our application. In the next part of this guide, you'll learn how to do exactly that.
Hexagonal architecture brings order to chaos and flexibility to fragile programs by making it easy to create modular applications where connections to the outside world always adhere to the most important API of all: your business domain.
Separation of concerns, Rust-style
Our transition to hexagonal architecture begins here. We'll move from a tightly coupled, untestable nightmare to a happy place where production doesn't fall over at 3am.
We're going to transform the Very Bad Application gradually, zooming out a little at a time until you see the whole hexagon. Then I'll answer "why hexagons?". Promise.
I've omitted details like module names and folder structure for simplicity. Don't worry, though. Before this guide is over, you'll have a complete application template you can reuse across all your projects.
The repository pattern in Rust
The worst part of the Very Bad Application is undoubtedly having an HTTP handler making direct queries to an SQL database. This is a plus-sized violation of the Single Responsibility Principle.
Code that understands the HTTP request-response cycle shouldn't also understand SQL. Code that needs a database doesn't need to know how that database is implemented. These could not be more different concerns.
Hard-coding your handler to manage SQL transactions will come back to bite you if you switch to Mongo. That Mongo connection will need ripping out if you move to event streaming, to querying a CQRS service, or to making an intern fetch data on foot.
All of these are valid data stores. If you overcommit by hard-wiring any one of them into your system, you guarantee future pain when you can least afford it – when you need to scale.
Repository is the general term for "some store of data". Our first step is to move the `create_author` handler away from SQL and towards the abstract concept of a repository.
A handler that says "give me any store of data" is much better than a handler that says "give me this specific store of data, because it's the only one I know how to use".
Your mind has undoubtedly turned to traits as Rust's way of defining behaviour as opposed to structure. How very astute of you. Let's define an `AuthorRepository` trait:
/// `AuthorRepository` represents a store of author data.
pub trait AuthorRepository {
    /// Persist a new [Author].
    ///
    /// # Errors
    ///
    /// - MUST return [CreateAuthorError::Duplicate] if an [Author] with the same [AuthorName]
    ///   already exists.
    fn create_author(
        &self,
        req: &CreateAuthorRequest, // (9)
    ) -> Result<Author, CreateAuthorError>; // (10)
}
An `AuthorRepository` is some store of author data with (currently) one method: `create_author`.
`create_author` takes a reference to the data required to create an author (9), and returns a `Result` containing either a saved `Author`, or a specific error type describing everything that might go wrong while creating an author (10). Right now, that's just the existence of duplicate authors, but we'll come back to error handling.
`AuthorRepository` is what's known as a domain trait. You might also have heard the term "port" before – a point of entry to your business logic. A concrete implementation of a port (say, an SQLite `AuthorRepository`) is called an adapter. Starting to sound familiar?
For any code that requires access to a store of author data, this port is the source of truth for how every implementation behaves. Callers no longer have to think about SQL or message queues, they just invoke this API, and the underlying adapter does all the hard work.
Domain models
`CreateAuthorRequest`, `Author` and `CreateAuthorError` are all examples of domain models.
Domain models are the canonical representations of data accepted by your business logic. Nothing else will do. Let's see some definitions:
/// A uniquely identifiable author of blog posts.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Author { // (11)
    id: Uuid,
    name: AuthorName,
}

impl Author {
    pub fn new(id: Uuid, name: AuthorName) -> Self {
        Self { id, name }
    }

    pub fn id(&self) -> &Uuid {
        &self.id
    }

    pub fn name(&self) -> &AuthorName {
        &self.name
    }
}

/// A validated and formatted name.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct AuthorName(String);

#[derive(Clone, Debug, Error)]
#[error("author name cannot be empty")]
pub struct AuthorNameEmptyError;

impl AuthorName {
    pub fn new(raw: &str) -> Result<Self, AuthorNameEmptyError> {
        let trimmed = raw.trim();
        if trimmed.is_empty() {
            Err(AuthorNameEmptyError)
        } else {
            Ok(Self(trimmed.to_string()))
        }
    }
}

/// The fields required by the domain to create an [Author].
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, From)]
pub struct CreateAuthorRequest { // (12)
    name: AuthorName,
}

impl CreateAuthorRequest {
    // Constructor and getters omitted
}

#[derive(Debug, Error)]
pub enum CreateAuthorError { // (13)
    #[error("author with name {name} already exists")]
    Duplicate { name: AuthorName },
    #[error(transparent)]
    Unknown(#[from] anyhow::Error),
    // to be extended as new error scenarios are introduced
}
Now, these aren't very exciting models (they'll get more exciting when we talk about identifying the correct domain boundaries and the special concern of authentication in part 3). But they demonstrate how the domain defines what data flowing through the system must look like.
If you don't construct a valid `CreateAuthorRequest` from the raw parts you've received over the wire (or from that intern), you can't call `AuthorRepository::create_author`. Sorry, jog on. 🤷
This pattern of newtyping should be familiar if you've read The Ultimate Guide to Rust Newtypes. If you haven't, I'll wait for you here.
Four special properties arise from this:
- Your business domain becomes the single source of truth for what it means to be an author, user, bank transaction or stock trade.
- The flow of dependencies in your application points in only one direction: towards your domain.
- Data structures within your domain are guaranteed to be in a valid state.
- You don't allow third-party implementation details, like SQL transactions or RPC messages to flow through unrelated code.
And this has immediate practical benefits:
- Easier navigation of the codebase for veterans and new joiners.
- It's trivial to implement new data stores or input sources – you just implement the corresponding domain trait.
- Refactoring is dramatically simplified thanks to the absence of hard-coded implementation details. If an implementation of a domain trait changes, nothing about the domain code or anything downstream from it needs to change.
- Testability skyrockets, because any domain trait, like `AuthorRepository`, can be mocked. We'll see this in action shortly.
Why do we distinguish `CreateAuthorRequest` (12) from `Author` (11)? Surely we could represent both saved and unsaved authors as:

pub struct Author {
    id: Option<Uuid>,
    name: AuthorName,
}
Right now, with this exact application, this would be fine. It might be annoying to check whether `id` is `Some` or `None` to distinguish whether an `Author` is saved or unsaved, but it would work.
However, we'd be mistaken in assuming that the data required to create an `Author` and the representation of an existing `Author` will never diverge. This is not at all true of real applications.
I've done a lot of work in onboarding and ID verification for fintechs. The data required to fully represent a customer is extensive. It can take many weeks to collect it and make an account fully operational.
This is pretty poor as a customer experience, and abandonment would be high if you took an all-or-nothing approach to account creation.
Instead, an initial outline of a customer's details is usually enough to create a basic profile. You get the customer interacting with the platform as soon as possible, and stagger the collection of the remaining data, fleshing out the model over time.
In this scenario, you don't want to represent some `CreateCustomerRequest` and `Customer` in the same way. `Customer` may contain dozens of optional fields and relations that aren't required to create a record in the database. It would be brittle and inefficient to pass such a needlessly large struct when creating a customer.
What happens when the domain representation of a customer changes, but the data required to create one remains the same? You'd be forced to change your request handling code too. Or, you'd be forced to do what you should have done from the start – decouple these models.
Hexagonal architecture is about building for change. Although these models may look like duplicative boilerplate to begin with, don't be fooled. Your application will change. Your API will diverge from your domain representation.
By modelling persistent entities separately from requests to create them, you encode an incredible capacity to scale.
Error types and hexagonal architecture
Let's zoom in on `CreateAuthorError` (13). It reveals some important properties of domain models and traits.
`CreateAuthorError` doesn't define failure cases such as an input name being invalid. This is the responsibility of the `CreateAuthorRequest` constructor (which in this case delegates to the `AuthorName` constructor). Here's more on using newtype constructors as the source of truth if you're unclear on this point.
`CreateAuthorError` defines failures that arise from coordinating the action of adapters. There are two categories: violations of business rules, like attempting to create a duplicate author, and unexpected errors that the domain doesn't know how to handle.
Much as our domain would like to pretend the real world doesn't exist, many things can go wrong when calling a database. We could fail to start a transaction, or fail to commit it. The database could literally catch fire in the course of a request.
The domain doesn't know anything about database implementations. It doesn't know about transactions. It doesn't know about the fire hazards posed by large datacenters and your pyromaniac intern. It's oblivious to retry strategies, cache layers and dead letter queues (we'll talk about these in part 5 – Advanced Techniques in Hexagonal Architecture).
But it needs some mechanism to propagate unexpected errors back up the call chain. This is typically achieved with a catch-all variant, `Unknown`, which wraps a general error type. `anyhow::Error` is particularly convenient for this, since it includes a backtrace for any error it wraps.
As a result (no pun intended), `CreateAuthorError` is a complete description of everything that can go wrong when creating an author.
This is incredible news for callers of domain traits – immensely powerful. Any code calling a port has a complete description of every error scenario it's expected to handle, and the compiler will make sure that it does.
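To make that concrete, here's a hedged sketch (not from the original) of a caller handling every variant. Remove a match arm, or add a new variant to `CreateAuthorError`, and the compiler objects:

// Illustrative only: a caller must handle every variant to compile.
fn handle_create(repo: &impl AuthorRepository, req: &CreateAuthorRequest) {
    match repo.create_author(req) {
        Ok(author) => println!("created author {}", author.id()),
        // A business-rule violation the caller can act on.
        Err(CreateAuthorError::Duplicate { name }) => {
            eprintln!("duplicate author: {:?}", name)
        }
        // The catch-all: log it and move on; the domain can't do more.
        Err(CreateAuthorError::Unknown(cause)) => eprintln!("unexpected error: {cause:?}"),
    }
}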
But enough theorizing! Let's see this in practice.
Implementing AuthorRepository
Here, I move the code required to interact with an SQLite database out of the Very Bad Application's `create_author` handler and into an implementation of `AuthorRepository`.
We start by wrapping an sqlx connection pool in our own `Sqlite` type. Module paths for sqlx types are fully qualified to avoid confusion:
#[derive(Debug, Clone)]
pub struct Sqlite {
    pool: sqlx::SqlitePool,
}

impl Sqlite {
    pub async fn new(path: &str) -> anyhow::Result<Sqlite> { // (14)
        let pool = sqlx::SqlitePool::connect_with(
            sqlx::sqlite::SqliteConnectOptions::from_str(path)
                .with_context(|| format!("invalid database path {}", path))? // (15)
                .pragma("foreign_keys", "ON"),
        )
        .await
        .with_context(|| format!("failed to open database at {}", path))?;

        Ok(Sqlite { pool })
    }
}
Wrapping types like `sqlx::SqlitePool` has the benefit of encapsulating a third-party dependency within code of our own design. Remember the Very Bad Application's leaky `main` function (1)? Wrapping external libraries and exposing only the functionality your application needs is how we plug the leaks.
Again, don't worry about module structure for now. Get comfortable with the type definitions, then we'll assemble the pieces.
This constructor does what you'd expect, with the possible exception of the `Result` it returns. Since the constructor isn't part of the `AuthorRepository` trait, we're not bound by its strict opinions on the types of allowable error.
anyhow is an excellent crate for working with non-specific errors. `anyhow::Result<T>` is equivalent to `std::result::Result<T, anyhow::Error>`, and `anyhow::Error` says we don't care which error occurred, just that one did.
At the point where most applications are instantiating databases, the only reasonable thing to do with an error is log it to stdout or some log aggregation service. `Sqlite::new` simply wraps any sqlx error it encounters with some extra context (15).
Now, the exciting stuff – the implementation of `AuthorRepository`:
impl AuthorRepository for Sqlite {
    async fn create_author(&self, req: &CreateAuthorRequest) -> Result<Author, CreateAuthorError> {
        let mut tx = self // (16)
            .pool
            .begin()
            .await
            .context("failed to start SQLite transaction")?;

        let author_id = self.save_author(&mut tx, req.name()) // (17)
            .await
            .map_err(|e| {
                if is_unique_constraint_violation(&e) { // (18)
                    CreateAuthorError::Duplicate {
                        name: req.name().clone(),
                    }
                } else {
                    anyhow!(e)
                        .context(format!("failed to save author with name {:?}", req.name()))
                        .into() // (19)
                }
            })?;

        tx.commit()
            .await
            .context("failed to commit SQLite transaction")?;

        Ok(Author::new(author_id, req.name().clone()))
    }
}
Look! Transaction management is now encapsulated within our `Sqlite` implementation of `AuthorRepository`. The HTTP handler no longer has to know about it.
`create_author` invokes the `save_author` method on `Sqlite`, which isn't specified by the `AuthorRepository` trait, but gives `Sqlite` the freedom to manage transactions as it requires.
This is the beauty of abstracting implementation details behind traits. The trait defines what needs to happen, and the implementation decides how. None of the how is visible to code calling a trait method.
`Sqlite`'s implementation of `AuthorRepository` knows all about SQLite error codes, and transforms any error corresponding to a duplicate author into the domain's preferred representation (18).
Of course, `Sqlite`, not being part of the domain's Garden of Eden, may encounter an error that the domain can't do much with (19).
This is a `500 Internal Server Error` in the making, but repositories shouldn't know about HTTP status codes. We need to pass it back up the chain in the form of `CreateAuthorError::Unknown`, both to inform the end user that something fell over, and to capture it for debugging.
This is a situation that the program – or at least the request handler – can't recover from. Couldn't we `panic`? The domain can't do anything useful here, so why not skip the middleman and let the panic recovery middleware handle it?
Don't panic
Until very recently, I would have said yes – if the domain can't do any useful work with an error, panicking will save you from duplicating error handling logic between your request handler and your panic-catching middleware.
However, thanks to a comment from matta and a horrible realization I had in the shower, I've reversed my position.
Whether or not you consider the database falling over a recoverable error, there are two incontrovertible reasons not to panic:
- Panicking poisons held mutexes. If your application state is protected by an `Arc<std::sync::Mutex<T>>`, panicking while you hold the guard poisons the mutex, and every subsequent attempt to acquire it will return an error. Your program is dead, and no amount of panic recovery middleware will bring it back.
- Other Rust devs won't expect you to panic. Most likely, you won't be the person woken at 3am to debug your code. Strive to make it as unsurprising as possible. Follow established error handling conventions diligently. Return errors, don't panic.
What about retry handling? Good question. We'll cover that in part 5, Advanced Techniques in Hexagonal Architecture.
Everything but the kitchen async
Have you spotted it? The mismatch between our repository implementation and the trait definition?
Ok, you caught me. I simplified the definition of `AuthorRepository`. There's actually more to it, because of course we want database calls to be async.
Writing to a file or calling a database server is precisely the kind of slow, blocking IO that we don't want to stall on.
We need to make `AuthorRepository` an async trait. Unfortunately, it's not quite as simple as writing:
pub trait AuthorRepository {
    async fn create_author(
        &self,
        req: &CreateAuthorRequest,
    ) -> Result<Author, CreateAuthorError>;
}
Rust understands this, and it will compile, but probably won't do what you expect.
Although writing `async fn` will cause your method's return value to be desugared into `impl Future<Output = Result<Author, CreateAuthorError>>`, it won't get an automatic `Send` bound.
As a result, your `Future` can't be sent between threads. For web applications, this is useless.
Let's spell things out for the compiler!
pub trait AuthorRepository {
    fn create_author(
        &self,
        req: &CreateAuthorRequest,
    ) -> impl Future<Output = Result<Author, CreateAuthorError>> + Send; // (20)
}
Since our `Author` and `CreateAuthorError` are both `Send`, a `Future` that wraps them can be too (20).
But what good is a repository if its methods return thread-safe `Future`s while the repo itself is stuck on just one thread? Let's ensure `AuthorRepository` is `Send` too.
pub trait AuthorRepository: Send {
    // ...
}
Ugh, we're not done. Remember about 4,000 words ago when we wrapped our application state in an `Arc` to inject into an HTTP handler? Well, trust me, we did.
`Arc` requires its contents to be both `Send` and `Sync` to be either `Send` or `Sync` itself! Here's a good discussion on the topic if you'd like to know more.
pub trait AuthorRepository: Send + Sync {
    // ...
}
Your instinct might now be to implement `AuthorRepository` for `&Sqlite`, since a shared reference `&T` is immutable and is `Send + Sync` whenever `T` is `Sync`. However, sqlx's connection pools are themselves `Send + Sync`, meaning our `Sqlite` wrapper is too – no references required.
Are we done yet?
🙃
Naturally, if we're sharing a repo between threads, Rust wants to be sure it won't be dropped unexpectedly. Let's reassure the compiler by making every `AuthorRepository` `'static`:

pub trait AuthorRepository: Send + Sync + 'static {
    // ...
}
Finally, our web server, axum, requires injected data to be `Clone`, giving our final trait definition:
pub trait AuthorRepository: Clone + Send + Sync + 'static {
    /// Asynchronously persist a new [Author].
    ///
    /// # Errors
    ///
    /// - MUST return [CreateAuthorError::Duplicate] if an [Author] with the same [AuthorName]
    ///   already exists.
    fn create_author(
        &self,
        req: &CreateAuthorRequest,
    ) -> impl Future<Output = Result<Author, CreateAuthorError>> + Send;
}
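As a quick, hypothetical demonstration of what these bounds buy us (not code from the guide), a repository and the future it returns can now be moved onto a spawned task – exactly what a multi-threaded runtime like Tokio needs:

// A sketch: this compiles only because AuthorRepository is Clone + Send + Sync + 'static
// and the future returned by create_author is Send.
async fn create_in_background<R: AuthorRepository>(repo: R, req: CreateAuthorRequest) {
    tokio::spawn(async move {
        let _ = repo.create_author(&req).await;
    });
}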
From the Very Bad Application to the merely Bad Application
It's time to start putting these pieces together. Let's reassemble our `create_author` HTTP handler to take advantage of the `AuthorRepository` abstraction.
First, the definition of `AppState`, which is the struct that contains the resources that should be available to every HTTP handler. This pattern should be familiar to users of both axum and Actix Web.
/// The application state available to all request handlers.
#[derive(Debug, Clone)]
struct AppState<AR: AuthorRepository> {
    author_repo: Arc<AR>, // (21)
}
`AppState` is now generic over `AuthorRepository`. That is, `AppState` provides HTTP handlers with access to "some store of author data", giving them the ability to create authors without knowledge of the implementation.
We wrap whatever instance of `AuthorRepository` we receive in an `Arc`, because axum is going to share it between as many async tasks as there are requests to our application.
This isn't our final destination – eventually our HTTP handler won't even know it has to save something (ah, sweet oblivion).
We're not quite there yet, but this is a vast improvement. Check out the handler!
pub async fn create_author<AR: AuthorRepository>(
    State(state): State<AppState<AR>>, // (22)
    Json(body): Json<CreateAuthorHttpRequestBody>,
) -> Result<ApiSuccess<CreateAuthorResponseData>, ApiError> {
    let domain_req = body.try_into_domain()?; // (23)
    state
        .author_repo
        .create_author(&domain_req)
        .await
        .map_err(ApiError::from) // (24)
        .map(|ref author| ApiSuccess::new(StatusCode::CREATED, author.into())) // (25)
}
Oh my.
Isn't it beautiful?
Doesn't your nervous system feel calmer to behold it?
Go on, take some deep breaths. Enjoy the moment. Here's the crime scene we started from if you need a reminder.
Ok, the walkthrough. `create_author` has access to an `AuthorRepository` (22), which it makes good use of. But first, it converts the raw `CreateAuthorHttpRequestBody` it received from the client into the holy domain representation (23). Here's how:
/// The body of an [Author] creation request.
#[derive(Debug, Clone, PartialEq, Eq, Deserialize)]
pub struct CreateAuthorHttpRequestBody {
    name: String,
}

impl CreateAuthorHttpRequestBody {
    /// Converts the HTTP request body into a domain request.
    fn try_into_domain(self) -> Result<CreateAuthorRequest, AuthorNameEmptyError> {
        let author_name = AuthorName::new(&self.name)?;
        Ok(CreateAuthorRequest::new(author_name))
    }
}
Nothing fancy! Boilerplatey, you might think. This is by design. We have preemptively decoupled the HTTP API our application exposes to the world from the internal domain representation.
As you scale, you will thank this so-called boilerplate. You will name your firstborn child for it.
These two things can now change independently. Changing the domain doesn't necessarily force a new web API version. Changing the HTTP request structure doesn't require any change to the domain. Only the mapping in `CreateAuthorHttpRequestBody::try_into_domain` and its corresponding unit tests get updated.
This is a very special property. Changes to transport concerns or business logic no longer spread through your program like wildfire. Abstraction has been achieved.
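Those mapping unit tests can be as small as this hedged sketch (test names and cases are illustrative):

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn try_into_domain_accepts_a_valid_name() {
        let body = CreateAuthorHttpRequestBody {
            name: "Angus".to_string(),
        };
        assert!(body.try_into_domain().is_ok());
    }

    #[test]
    fn try_into_domain_rejects_an_empty_name() {
        let body = CreateAuthorHttpRequestBody {
            name: "   ".to_string(),
        };
        assert!(body.try_into_domain().is_err());
    }
}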
Thanks to the pains we took to define all the errors an `AuthorRepository` is allowed to return, constructing an HTTP response is dreamy. In the error case, we map seamlessly to a serializable `ApiError` using `ApiError::from` (24):
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ApiError {
    InternalServerError(String), // (26)
    UnprocessableEntity(String), // (27)
}

impl From<CreateAuthorError> for ApiError {
    fn from(e: CreateAuthorError) -> Self {
        match e {
            CreateAuthorError::Duplicate { name } => {
                Self::UnprocessableEntity(format!("author with name {} already exists", name)) // (28)
            }
            CreateAuthorError::Unknown(cause) => {
                tracing::error!("{:?}\n{}", cause, cause.backtrace());
                Self::InternalServerError("Internal server error".to_string())
            }
        }
    }
}

impl From<AuthorNameEmptyError> for ApiError {
    fn from(_: AuthorNameEmptyError) -> Self {
        Self::UnprocessableEntity("author name cannot be empty".to_string())
    }
}
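How `ApiError` becomes an HTTP response isn't shown in this guide; here's one hedged sketch of an axum `IntoResponse` implementation, with an assumed JSON body shape:

use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};
use axum::Json;

impl IntoResponse for ApiError {
    fn into_response(self) -> Response {
        let (status, message) = match self {
            ApiError::InternalServerError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, msg),
            ApiError::UnprocessableEntity(msg) => (StatusCode::UNPROCESSABLE_ENTITY, msg),
        };
        // A minimal JSON body; real responses might add error codes or request IDs.
        (status, Json(serde_json::json!({ "error": message }))).into_response()
    }
}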
If the author was found to be a duplicate, it means the client's request was correctly structured, but its contents were unprocessable. Hence, we respond `422 Unprocessable Entity` (27).
Important detail alert! Do you see how we're manually building an error message at (28), even though `CreateAuthorError::Duplicate`'s `Display` implementation could have produced this error message for us?
This is another instance of aggressive decoupling of our transport concern (JSON over HTTP) from the domain. Returning full-fat, unpasteurised domain errors to users is an easy way to leak private details of your application. It also results in unexpected changes to HTTP responses when domain implementation details change!
If we get an error the domain didn't expect – `CreateAuthorError::Unknown` here – that maps straight to `InternalServerError` (26).
The finer points of how you log the underlying cause will vary according to your needs. Crucially, however, the error itself is not exposed to the end user.
Finally, our success case (25). We take a reference to the returned `Author` and transform it into its public API counterpart. It gets sent on its way with status `201 Created`.
/// The response body data field for successful [Author] creation.
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
pub struct CreateAuthorResponseData {
    id: String,
}

impl From<&Author> for CreateAuthorResponseData {
    fn from(author: &Author) -> Self {
        Self {
            id: author.id().to_string(),
        }
    }
}
Chef's kiss. 🧑🍳
Testing HTTP handlers with injected repositories
Oh, it gets better.
Previously, our handler code was impossible to unit test, because we needed a real database instance to call it. Trying to exercise every failure mode of a database call with a real database is pure pain.
Those days are over. By injecting any type that implements `AuthorRepository`, we open our HTTP handlers to unit testing with mock repositories.
#[cfg(test)]
mod tests {
    // Imports omitted.

    #[derive(Clone)]
    struct MockAuthorRepository {
        create_author_result: Arc<Mutex<Result<Author, CreateAuthorError>>>, // (29)
    }

    impl AuthorRepository for MockAuthorRepository {
        async fn create_author(
            &self,
            _: &CreateAuthorRequest,
        ) -> Result<Author, CreateAuthorError> {
            let mut guard = self.create_author_result.lock().await;
            let mut result = Err(CreateAuthorError::Unknown(anyhow!("substitute error")));
            mem::swap(guard.deref_mut(), &mut result);
            result // (30)
        }
    }
}
`MockAuthorRepository` is defined to hold the `Result` it should return in response to `AuthorRepository::create_author` calls (29) (30).
The rather nasty type signature at (29) is due to the fact that `AuthorRepository` has a `Clone` bound, which means `MockAuthorRepository` must be `Clone` too.
Unfortunately for us, `CreateAuthorError` isn't `Clone`, because its `Unknown` variant contains `anyhow::Error`. `anyhow::Error` isn't `Clone`, as it's designed to wrap unknown errors, which may not be `Clone` themselves. `std::io::Error` is one common non-`Clone` error.
Rather than passing `MockAuthorRepository` a convenient `Result<Author, CreateAuthorError>`, we need to give it something cloneable – an `Arc`. And because the mock needs mutable access to move the un-cloneable `Result` out from behind that shared `Arc`, we wrap the `Result` in a `Mutex`. (I'm using a `tokio::sync::Mutex` here, hence the `await`, but `std::sync::Mutex` also works with minor changes to the supporting code.)
The mock implementation of `create_author` then swaps a dummy value for the real result in order to return it to the test caller.
Here's the test for the case where the repository call succeeds. I leave the error case to your powerful imagination, but if you crave more Rust testing pearls, I'll have a comprehensive guide to unit testing for you soon!
#[tokio::test(flavor = "multi_thread")]
async fn test_create_author_success() {
    let author_name = AuthorName::new("Angus").unwrap();
    let author_id = Uuid::new_v4();
    let repo = MockAuthorRepository { // (31)
        create_author_result: Arc::new(Mutex::new(Ok(Author::new(
            author_id,
            author_name.clone(),
        )))),
    };
    let state = axum::extract::State(AppState {
        author_repo: Arc::new(repo),
    });
    let body = axum::extract::Json(CreateAuthorHttpRequestBody {
        name: author_name.to_string(),
    });
    let expected = ApiSuccess::new( // (32)
        StatusCode::CREATED,
        CreateAuthorResponseData {
            id: author_id.to_string(),
        },
    );

    let actual = create_author(state, body).await; // (33)
    assert!(
        actual.is_ok(),
        "expected create_author to succeed, but got {:?}",
        actual
    );

    let actual = actual.unwrap();
    assert_eq!(
        actual, expected,
        "expected ApiSuccess {:?}, but got {:?}",
        expected, actual
    )
}
At (31), we construct a `MockAuthorRepository` with an arbitrary success `Result`. We expect that a `Result::Ok(Author)` from the repo should produce a `Result::Ok` from the handler (32).
This situation is simple to set up – we just call the `create_author` handler with a `State` object constructed from the `MockAuthorRepository` in place of a real one (33). The assertions are self-explanatory.
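If your imagination wants a head start, the duplicate-author case might look like this sketch, where the expected message mirrors the `From<CreateAuthorError>` impl above:

#[tokio::test(flavor = "multi_thread")]
async fn test_create_author_duplicate() {
    let author_name = AuthorName::new("Angus").unwrap();
    let repo = MockAuthorRepository {
        // The mock returns the domain's duplicate-author error.
        create_author_result: Arc::new(Mutex::new(Err(CreateAuthorError::Duplicate {
            name: author_name.clone(),
        }))),
    };
    let state = axum::extract::State(AppState {
        author_repo: Arc::new(repo),
    });
    let body = axum::extract::Json(CreateAuthorHttpRequestBody {
        name: author_name.to_string(),
    });

    let actual = create_author(state, body).await;
    assert_eq!(
        actual,
        Err(ApiError::UnprocessableEntity(format!(
            "author with name {} already exists",
            author_name
        )))
    );
}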
I know, I know – you're itching to see what `main` looks like with these improvements, but we're about to take a much bigger and more important leap in our understanding of hexagonal architecture.
In part 3, coming next, I'll introduce you to the beating heart of an application domain: the `Service`.
We'll ratchet up the complexity of our example application to understand how to set domain boundaries. We'll confront the tricky problem of master records through the lens of authentication, and explore the interface between hexagonal applications and distributed systems.
And yes, we'll finally answer, "why hexagons?".
Service, the heart of hexagonal architecture
Introducing the Service trait
The `Repository` trait does a great job of getting datastore implementation details out of code that handles incoming requests.
If our application really were as simple as the one I've described so far, this would be good enough.
But most real applications aren't this simple, and their domain logic involves more than writing to a database and responding `201`.
For example, each time a new author is successfully created, we might want to dispatch an event for other parts of our system to consume asynchronously.
Perhaps we want to track metrics related to author creation in a time series database like Prometheus? Or send a welcome email?
This sequence of conditional steps is domain logic. We've already seen that domain logic doesn't belong in adapters. Otherwise, when you swap out the adapter, you have to rewrite domain code that has nothing to do with the adapter implementation.
So, domain logic can't go in our HTTP handler, and it can't go in our `AuthorRepository`. Where does it live?
A `Service`.
A `Service` refers to both a trait that declares the methods of your business API, and an implementation that's provided by the domain to your inbound adapters.
It abstracts calls to databases, sending of notifications and collection of metrics from your handlers behind a clean, mockable interface.
Currently, our axum application state looks like this:
/// The application state available to all request handlers.
#[derive(Debug, Clone)]
struct AppState<AR: AuthorRepository> {
    author_repo: Arc<AR>,
}
Let's spice up our application with some more domain traits:
/// `AuthorMetrics` describes an aggregator of author-related metrics, such as a time-series
/// database.
pub trait AuthorMetrics: Send + Sync + Clone + 'static { // (34)
    /// Record a successful author creation.
    fn record_creation_success(&self) -> impl Future<Output = ()> + Send;

    /// Record an author creation failure.
    fn record_creation_failure(&self) -> impl Future<Output = ()> + Send;
}

/// `AuthorNotifier` triggers notifications to authors.
pub trait AuthorNotifier: Send + Sync + Clone + 'static { // (35)
    fn author_created(&self, author: &Author) -> impl Future<Output = ()> + Send;
}
Together with `AuthorRepository`, these ports illustrate the kinds of dependencies you might expect of a real production app.
`AuthorMetrics` (34) describes an aggregator of author-related metrics, such as a time series database. `AuthorNotifier` (35) sends notifications to authors.
Rather than stuffing these domain dependencies into `AppState` directly, we're aiming for this:
/// The application state available to all request handlers.
#[derive(Debug, Clone)]
struct AppState<AS: AuthorService> {
    author_service: Arc<AS>,
}
How do we get there? Let's start with the `Service` trait definition:
pub trait AuthorService: Clone + Send + Sync + 'static {
    /// Asynchronously create a new [Author].
    ///
    /// # Errors
    ///
    /// - [CreateAuthorError::Duplicate] if an [Author] with the same [AuthorName] already exists.
    fn create_author(
        &self,
        req: &CreateAuthorRequest,
    ) -> impl Future<Output = Result<Author, CreateAuthorError>> + Send;
}
Much like `AuthorRepository`, the `Service` trait has an async method, `create_author`, that takes a `CreateAuthorRequest` by reference and returns a `Future` that outputs either an `Author`, if creation was successful, or a `CreateAuthorError` if not.
Although the signatures of `AuthorService` and `AuthorRepository` look similar, this is a byproduct of a simple domain. They aren't required to match, and by separating our concerns with traits in this way, we allow them to diverge in future.
Now, the implementation of `AuthorService`:
/// Canonical implementation of the [AuthorService] port, through which the author domain API is
/// consumed.
#[derive(Debug, Clone)]
pub struct Service<R, M, N> // (36)
where
    R: AuthorRepository,
    M: AuthorMetrics,
    N: AuthorNotifier,
{
    repo: R,
    metrics: M,
    notifier: N,
}

// Constructor implementation omitted

impl<R, M, N> AuthorService for Service<R, M, N>
where
    R: AuthorRepository,
    M: AuthorMetrics,
    N: AuthorNotifier,
{
    /// Create the [Author] specified in `req` and trigger notifications.
    ///
    /// # Errors
    ///
    /// - Propagates any [CreateAuthorError] returned by the [AuthorRepository].
    async fn create_author(&self, req: &CreateAuthorRequest) -> Result<Author, CreateAuthorError> {
        let result = self.repo.create_author(req).await; // (37)
        if result.is_err() {
            self.metrics.record_creation_failure().await;
        } else {
            self.metrics.record_creation_success().await;
            self.notifier.author_created(result.as_ref().unwrap()).await;
        }
        result
    }
}
The `Service` struct encapsulates the dependencies required to execute our business logic (36).
The implementation of `AuthorService::create_author` (37) illustrates why we don't want to embed these calls directly in handler code, which has enough work to do just managing the request-response cycle.
First, we call the `AuthorRepository` to persist the new author, then we branch. On a failed repository call, we call `AuthorMetrics` to track the failure. On success, we submit success metrics, then trigger notifications. In both cases, we propagate the repository `Result` to the caller.
I defined the `AuthorMetrics` and `AuthorNotifier` methods as infallible, since metric aggregation and notification dispatch typically take place concurrently, with separate error handling paths.
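To illustrate what "concurrently, with separate error handling paths" might mean in an adapter, here's a hypothetical fire-and-forget notifier; nothing like it appears in the guide's code, and the `EmailNotifier` name is invented:

/// A hypothetical notifier adapter that never fails from the domain's perspective.
#[derive(Clone)]
struct EmailNotifier;

impl AuthorNotifier for EmailNotifier {
    fn author_created(&self, author: &Author) -> impl Future<Output = ()> + Send {
        let author = author.clone();
        async move {
            // Fire and forget: failures are logged on the background task,
            // never returned to the domain.
            tokio::spawn(async move {
                tracing::info!("sending welcome email to {:?}", author.name());
                // ...call the real email client here, logging any error.
            });
        }
    }
}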
Not always, though. Imagine if the metrics and notifier calls also returned errors. Suddenly, our test scenarios include:
- Calls to all three dependencies succeed.
- The repo call fails, and the metrics call fails too.
- The repo call fails, and the metrics call succeeds.
- The repo call succeeds, but the metrics fall over.
- The repo and metrics calls succeed, but the notifier call fails.
Now picture every permutation of these cases with all of the `Result`s produced when receiving, parsing and responding to HTTP requests 🤯.
This is what happens if you stick domain logic in your handlers. Without a `Service` abstraction, you have to integration test this hell.
Nope. No. Not today, thank you.
To unit test handlers that call a `Service`, you just mock the service, returning whatever success or error variant you need to check the handler's output.
To unit test a `Service`, you mock each of its dependencies, returning the successes and errors required to exercise all of the paths described above – a sketch follows below.
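For instance, mocking the two infallible ports can be as simple as this sketch; together with the earlier `MockAuthorRepository`, these let you drive `Service::create_author` down every branch:

#[derive(Clone)]
struct MockAuthorMetrics;

impl AuthorMetrics for MockAuthorMetrics {
    async fn record_creation_success(&self) {}
    async fn record_creation_failure(&self) {}
}

#[derive(Clone)]
struct MockAuthorNotifier;

impl AuthorNotifier for MockAuthorNotifier {
    async fn author_created(&self, _: &Author) {}
}

// A Service under test then composes the mocks, e.g.:
// let service = Service::new(mock_repo, MockAuthorMetrics, MockAuthorNotifier);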
Finally, you integration test the whole system, focusing on your happy paths and the most important error scenarios.
Paradise 🌅.
Now that you know how to wrap your domain's dependencies in a `Service`, and you're happy that it's the service that gets injected into our handlers via `AppState`, let's check back in on `main`.
main is for bootstrapping
The only responsibilities of your `main` function are to bring your application online and clean up once it's done.
Some developers delegate bootstrapping to a `setup` function that does the hard work and passes the result back to `main`, which just decides how to exit. This works too, and the differences don't matter for this discussion.
`main` must construct the `Service`s required by the application, inject them into our handlers, and set the whole program in motion:
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let config = Config::from_env()?;

    // A minimal tracing middleware for request logging.
    tracing_subscriber::fmt::init();

    let sqlite = Sqlite::new(&config.database_url).await?; // (38)
    let metrics = Prometheus::new();
    let email_client = EmailClient::new();
    let author_service = Service::new(sqlite, metrics, email_client); // (39)

    let server_config = HttpServerConfig {
        port: &config.server_port,
    };
    let http_server = HttpServer::new(author_service, server_config).await?; // (40)
    http_server.run().await
}
To do this, `main` needs to know which adapters to slot into the domain's ports (38). This example uses an SQLite `AuthorRepository`, a Prometheus `AuthorMetrics`, and an email-based `AuthorNotifier`.
It combines these implementations of the domain traits into an `AuthorService` (39) using the author domain's `Service` constructor.
Finishing up, it injects the `AuthorService` into an HTTP server and runs it (40).
Even though `main` knows which adapters we want to use, we still aim not to leak implementation details of third-party crates. Here are the use statements for this `main.rs` file:
use hexarch::config::Config;
use hexarch::domain::author::service::Service;
use hexarch::inbound::http::{HttpServer, HttpServerConfig};
use hexarch::outbound::email_client::EmailClient;
use hexarch::outbound::prometheus::Prometheus;
use hexarch::outbound::sqlite::Sqlite;
This is all proprietary to our application. Even though we're using an axum HTTP server, `main` doesn't know about axum.
Instead, we've created our own `HttpServer` wrapper around axum that exposes only the functionality the rest of the application needs.
Configuration of routes, ports, timeouts, etc. lives in a predictable place, isolated from unrelated code. If axum were to make changes to its API, we'd need to update our `HttpServer` internals, but they'd be invisible to `main`.
There's another motivating factor behind this: `main` is pretty resistant to unit testing. It composes unmockable dependencies and handles errors by logging to stdout and exiting. The less code we put here, the smaller this testing dead zone.
Setup and configuration for integration tests is often subtly different from `main`'s, too. Imagine having to configure all the routes and middleware for an axum server separately for `main` and for tests. What a chore!
By defining our own `HttpServer` type, both `main` and tests can easily spin up our app's server with the config they require. No duplication.
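The guide leaves `HttpServer`'s internals to the example repository, but a minimal sketch consistent with the `new`/`run` calls above might look like this (the field names and the `AuthorService`-generic handler are assumptions):

use std::sync::Arc;

use anyhow::Context;

pub struct HttpServer {
    router: axum::Router,
    listener: tokio::net::TcpListener,
}

impl HttpServer {
    pub async fn new<AS: AuthorService>(
        author_service: AS,
        config: HttpServerConfig<'_>,
    ) -> anyhow::Result<Self> {
        let state = AppState {
            author_service: Arc::new(author_service),
        };
        // Route configuration lives here, in one predictable place.
        let router = axum::Router::new()
            .route("/authors", axum::routing::post(create_author::<AS>))
            .with_state(state);
        let listener = tokio::net::TcpListener::bind(format!("0.0.0.0:{}", config.port))
            .await
            .with_context(|| format!("failed to listen on {}", config.port))?;
        Ok(Self { router, listener })
    }

    pub async fn run(self) -> anyhow::Result<()> {
        axum::serve(self.listener, self.router)
            .await
            .context("received error from running server")
    }
}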
Why hexagons?
Ok, it's time. It's actually happening.
I've shown you the key, practical components of hexagonal architecture: services, ports, adapters, and the encapsulation of third-party dependencies.
Now some theory – why hexagons?
Well, I hate to break it to you, but hexagons aren't special. There's no six-sided significance to hexagonal architecture. The truth is, any polygon will do.
Hexagonal architecture was originally proposed by Alistair Cockburn, who chose hexagons to represent the way adapters surround the business domain at the core of the application. The symmetry of hexagons also reflects the duality of inbound and outbound adapters.
I've been holding off on a classic hexagonal architecture diagram until I showed you how the ports and adapters compose. Here you go:
[Diagram: the domain at the core of the hexagon, ports at its boundary, and inbound and outbound adapters connecting it to the outside world]
The outside world is a scary, ever-changing place. Anything can go wrong at any time.
Your domain logic, on the other hand, is a calm and tranquil glade. It changes if, and only if, the requirements of your business change.
The adapters are the bouncers enforcing the domain's dress code on anything from the outside that wants to get in.
How to choose the right domain boundaries
What belongs in a domain? What models and ports should it include? How many domains should a single application have?
These are the questions many people struggle with when adopting hexagonal architecture, or domain-driven design more generally.
I've got good news and bad news 💁.
The bad news is that I can't answer these questions for you, because they depend heavily on variables like your scale, your overall system architecture and your requirements around synchronicity.
The good news is, I have two powerful rules of thumb to help you make the right decision, and we'll go through some examples together.
Firstly, a domain represents some tangible arm of your business.
I've been discussing an "author domain", because using a single-entity domain makes it easier to teach the concepts of hexagonal architecture.
For a small blogging app, however, it's likely that a single "blog domain" would be the correct boundary to draw, since there is only one business concern – running a blog.
For a site like Medium, there would be multiple domains: blogging, user identity, billing, customer support, and so on. These are related but distinct business functions that communicate using each other's `Service` APIs.
If this is starting to sound like microservices to you, you're not imagining things. We'll talk about the relationship between hexagonal architecture and microservices in part four.
Secondly, a domain should include all entities that must change together as part of a single, atomic operation.
Consider our blogging app. The author domain manages the lifecycle of an `Author`. But what about blog posts?
If an `Author` is deleted, do we require that all of their posts are deleted atomically, or is it acceptable for their posts to be accessible for a short time after the deletion of the author?
In the first case, authors and posts _must_ be part of the same domain, since the deletion of an author is atomic with the deletion of their blog posts.
In the second case, authors and posts could theoretically be represented as separate domains, which communicate to coordinate deletion events.
This communication could be synchronous (the author domain calls and awaits a deletion method on `PostService`) or asynchronous (the author domain pushes some `AuthorDeletionEvent` onto a message queue, for the post domain to process later).
Neither of these cases are atomic. Business logic, being unaware of repository implementation details, has no concept of transactions in the SQL sense.
If you find that you're leaking transactions into your business logic to perform cross-domain operations atomically, your domain boundaries are wrong. Cross-domain operations are never atomic. These entities should be part of the same domain.
Start with large domains
According to the first rule of thumb, we wouldn't actually want to separate authors and posts into different domains. They're part of the same business function, and cross-domain communication complicates your application. It has to be worth the cost.
We're happy to pay this cost when different parts of our business communicate but need to change often and independently. We don't want these domains to be tightly coupled.
Identifying these related but independent components is an ongoing, iterative process based on the friction you experience as your application grows.
This is why starting with a single, large domain is preferable to designing many small ones upfront.
If you jump the gun and build a fragmented system before you have first-hand experience of the points of friction in both the system and the business, you pay a huge penalty.
You must write and maintain the glue code for inter-domain communication before you know if it's needed. You sacrifice atomicity which you might later find you need. You will have to undo this work and merge domains when your first guess at domain boundaries is inevitably wrong.
A fat domain makes no assumptions about how different business functions will evolve over time. It can be decomposed as the need arises, and maintains all the benefits of easy atomicity until that time comes.
Authentication and authorization with hexagonal architecture
I was deliberate in choosing `Author`s rather than `User`s for our example application. If you're used to working on smaller, monolithic apps, it's not obvious where entities like `User`s belong in hexagonal architecture.
If you're comfortable in a microservices context, you'll have an easier time.
The primary entity for authentication and authorization will "own" many other entities. A `User` for a social network will own one or more `Profile`s, `Settings`, `Subscription`s, `AccountStatus`es, and so on. All of these, and all the data that they own in turn, are traceable back to the `User`.
If we follow our rule of thumb – that entities that change together atomically belong in the same domain – the presence of these root entities causes everything to belong to the same domain. If you delete a `User`, and require synchronous deletion of all owned entities, everything must be deleted in the same, atomic operation.
Isn't this the same as having no domain boundaries at all?
For a small to medium application, where you roll your own auth and aren't expecting massive growth, this is fine. Overly granular domains will cause you more problems than they solve.
However, for larger applications and apps that use third-party auth providers like Auth0, a single domain is unworkable.
The `User` entity and associated auth code should live in its own domain, with entities in other domains retaining a unique reference to their owner. Then, depending on your level of scale, deletions can happen in two ways:
- Synchronously, with the auth domain calling each of the other domains' `Service::delete_by_user_id` methods.
- Asynchronously, where the auth domain publishes a deletion event for other domains to process in their own time.
Neither of these scenarios is atomic.
Regardless of what architecture you use, atomic deletion of a `User` and all their dependent records is taken off the table at a certain scale. Hexagonal architecture just makes this explicit.
To use an extreme example, deletion of a Facebook account, including everything it has ever posted, takes up to 90 days (including a 30-day grace period). Even then, copies may remain in backups.
In addition to the vast volume of data to be processed, there will be a huge amount of internal and regulatory process to follow in the course of this deletion. This can't be modeled as an atomic operation.
A Rust project template for hexagonal architecture
Until now, I've avoided discussing file structures because I didn't want to distract from the core concepts of domains, ports and adapters.
Now you have the full picture, I'll let you explore this example repository at your leisure and take inspiration from its folder structure.
Branch `3-simple-service` contains the code we've discussed in part three of this guide, and provides a basic but representative example of a hexagonal app.
Dividing `src/lib` into `domain` (for business logic), plus `inbound` and `outbound` (for adapters), has worked well at scale for several teams I've been part of. It's not sacred, though. How you name these modules and the level of granularity you choose should suit your own needs.
All I ask is that, whatever convention you adopt, you document it for both new and existing team members to refer to.
Document your decisions, I beg you.
Is hexagonal architecture right for me?
Okay, that was a lot to take in. I'm going to give you a week to digest it – feel free to ask questions in the comments!
In part four, we'll be discussing the trade-offs of using hexagonal architecture compared with other common architectures, and see how it simplifies the jump to microservices when the time is right.
Trade-offs of hexagonal architecture in Rust
Part four is coming soon.
Advanced techniques in hexagonal architecture
In keeping with tradition, part five will be released after part four.