If you've worked on long-lived applications before (think "enterprisey" web apps), then you've probably thought to yourself at least once: "This sucks! How did we end up with such a crappy API that's just shuttling our data back and forth from our database?"
Truth be told, I've said something along those lines a couple of times before. Once was when I was dropped into a legacy application with 4 versions of the same CRUD endpoints, which had gone through at least 3 contracting companies, with nobody on the team willing to state with certainty what the differences were. A couple of weeks in I moved on, but I'll never forget that there were so many conflicting options and filters between the endpoints, I ended up writing a script to run through a combinatorial set of options, saving the output and diffing the results so I could see what affected what.
I also must admit that long ago I, too, was guilty of building up a few disastrously complex APIs. To the teammates those APIs were handed off to... I'm so sorry.
However, after accumulating enough painful experiences working on and building sucky APIs, I've sort of carved out what it takes to build APIs which are actually fun and useful. It's been about 15 years since I was first paid to code, and I like to think it's been at least half that since I unknowingly wrote disastrously bad code. Nowadays I only knowingly write bad code as necessary. And although I'm always learning, I think there has been enough time for me to reflect and collect my thoughts on the bad and good APIs I've built.
To help others avoid some of the mistakes I've made, I'll try to put down how I like to structure APIs so that they:
- Are useful for whatever product you're building.
- Are easy to reason about when developing with them.
- And keep said usefulness and development ease over time.
Getting concrete, I will list out the why and how for the following statement: We do not want to surface as API endpoints the operations usually associated with manipulating data. That is, we do not need Create, Read, Update, and Delete, colloquially known as CRUD, for our application's objects. Instead we should collapse them all into just two operations:
- Find
- Upsert
I like to remember this approach by a silly acronym, FU.
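To make that concrete up front, here is a minimal sketch of the entire surface this approach argues for, in the same GraphQL form used later in this post. The type and argument names are placeholders, not a prescription:
type Req {
  id: Int!
  status: String!
  deleted: Boolean!
}

input ReqInput {
  status: String
  deleted: Boolean
}

type Query {
  # one entry point for all reads
  find(ids: [Int], statuses: [String]): [Req]
}

type Mutation {
  # one entry point for all writes: create if missing, otherwise update
  upsert(id: Int, input: ReqInput!): Req
}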
What's Next
Now before I get into the details, this is what is to come:
- First, I'll chip away at the status quo of CRUD by showing where its names are misnomers, e.g. how we can drop the Delete from our APIs.
- Then, to provide some continuity for those accustomed to CRUD, we'll see how it maps down to simply FU.
- Finally, I'll give some case examples which further demonstrate the complete picture, e.g. how I came to this conclusion through real-world experience.
Subtraction
We begin with subtraction, and why we don't need the weighty Delete in CRUD. Why not, you might ask?
Because most often we are not actually deleting things. Storage is cheap. However, time, inference, and data collection are not. So on average we want to keep data around once we have it. For as long as legally possible, of course.
So instead of blowing away a complete database row, or nulling out columns, we are often simply changing a single deleted column, or some other flag someplace.
Even the few data-purging systems, a la GDPR compliance services, are not true deletes. They are more compaction and transformation to another form that is inactive/inert for specific entities, e.g. redaction or user data export. So again, we don't really ever delete; it's all just changes upon changes (or, if you're into event sourcing, appends upon appends). So let's move on and just say Delete is another name for Update.
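As a sketch of what that looks like at the API layer (the update call and deleted flag here are hypothetical, purely to illustrate):
mutation {
  # "deleting" requisition 42 is really an update that flips a flag
  update(formID: 42, input: { deleted: true }) {
    id
    deleted
  }
}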
Upsert
Now that we have mapped Delete to Update, let's look at both Update & Create at the same time, for these two can be combined into a single call: if it does not exist, Create; otherwise, Update. Colloquially referred to as an upsert.
Upsert often has direct support in certain databases; see Postgres's ON CONFLICT, or Cassandra, where INSERT and UPDATE are documented to behave as upserts. I'm going to clarify my usage here and define it to mean: "If a condition is met, we create; otherwise, we update." So based on a condition, it's one or the other. And if you remember back to Delete, we said it was just a changed flag. So again, it's all changes upon conditional changes.
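For illustration, here's a hypothetical pair of calls against such an upsert (the externalID conflict key and arguments are made up for this sketch):
# first call: nothing with externalID "req-7" exists yet, so this creates it
mutation FirstCall {
  upsert(externalID: "req-7", input: { title: "Backend Engineer" }) {
    id
  }
}

# second call: "req-7" now exists, so the exact same operation updates it instead
mutation SecondCall {
  upsert(externalID: "req-7", input: { title: "Senior Backend Engineer" }) {
    id
  }
}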
So we started with Create, Update, and Delete, and have now mapped them all to just the single Upsert.
Find
By process of elimination, we have arrived at the last remaining letter from the premise that we don't need CRUD: R, Read. Which, of course, is now going to be mapped to Find. This may seem arbitrary to a certain degree, but I find the semantics of Read to be limiting.
With Read, the implication is that the data is just there, an open book, no work required. With Find, however, we imply that a single interface provides and does all the work needed to find specific data. Whether it be slicing and dicing with a SQL where clause, filtering down by calling another service, or some other means, Find covers more of what real applications do when an API needs to provide access to data.
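To put that contrast in interface form, a small sketch (the argument names are hypothetical):
type Query {
  # Read implies the data is just sitting there, keyed and ready
  read(id: Int!): Req

  # Find implies the interface does the locating: filtering,
  # joining, even calling out to other services behind the scenes
  find(ids: [Int], statuses: [String], text: String): [Req]
}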
I will note that one could also call this searching, and you probably could swap out Find for Search in this text. However, and I know this to be childish, I don't find the acronym SU as appealing as FU.
Case Examples
Job Requisitions
We had an interface to query for job requisitions. Job requisitions were defined as the rows of data generated by hiring managers as they filled out a form with details on the given role to be hired for. E.g. they would enter job titles, locations, job categories, descriptions, competencies, details on their team, and a whole slew of other unique information.
Our first go at an interface for this data came about from the simple requirement of just showing the form after submit. So, basically, an interface to read(form_id) => form_data. Then we needed to surface requisitions per author. Not long after, we added states to requisitions (pending, approved, ...) and again we had to filter on them. So what started out as a simple where clause became ever more complex. Not to mention business rules started to come into play which were difficult to satisfy with SQL alone. E.g. we need requisitions with specific competencies (in another service), and which were already used by the given user, but not the impersonated user, and when those competencies are on a draft, that requisition should not show up in results when its text contains the outdated tag line...
So what we ended up with was not a read anymore; it was an eclectic collection of various reads which had to be joined together as use cases overlapped or converged.
On the whole, our code to read requisitions was properly contained away from the business logic, just as many best practices suggest. Nevertheless, problems arrived when we mapped our read_xxx endpoints onto the business complexity on a 1-to-1 basis. Our read endpoints grew into a tangled, hard-to-modify mess of different read calls, separately used across different use cases. In other words, we didn't land on the right abstraction.
In GraphQL interface form, this is what we started with and were eventually inundated by. (Do keep in mind these were not all present at the same time; I'm only listing them out chronologically for demonstrative purposes.)
type Query {
  # at the start
  read(formID: Int!): [Req]
  # then a week in we augmented the original read with an optional external id.
  read(formID: Int!, externalID: String): [Req]
  # another week after we had to add...
  read_by_form_status(formID: Int!, status: String): [Req]
  # and then...
  read_by_hiringmanager_id_or_status(hmID: Int, status: String): [Req]
  # and ...
  read_for_weird_business_use_case(id: Int): [Req]
  # ...
}
As you can imagine, this API became a not-so-nice experience as time progressed, for new devs joining as well as existing devs. We had a combinatorial number of ways to get to similar data, duplicated code galore, and reimplementation of often-used clauses and workarounds for edge cases between each read call. These edge cases were disparate and had varying assumptions that could not be assumed to be the same between calls, even if they took the same IDs.
Find the light
What we needed was an API that could allow us to map from 1 endpoint to the N business use cases we had to satisfy.
Instead of the collection of reads, we should have begun with a robust and ready-to-grow find interface which would have allowed us to adapt to the growing business logic as needed.
From that experience we learned what worked and what didn't. We eventually did the hard work of bringing back together the shared assumptions in each call and making each read consistent. Ultimately we refactored all of the reads down into a single find query.
type Query {
  find(
    formIDs: [Int]
    hmIDs: [Int]
    statuses: [String]
    externalIDs: [String]
    # ...
  ): [Req]
}
This single API call encompassed the varying approaches needed to find different requisitions. Of course, complexity still lived within the implementation, for we couldn't avoid actually dealing with the edge cases, but at least we could reason about the assumptions in a single location, keeping the reasoning required low when building complex and useful user experiences.
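As a usage sketch, each of the old read_* endpoints becomes nothing more than a different combination of arguments to the one query (the aliases and values here are hypothetical):
query {
  byForm: find(formIDs: [42]) { id }
  byFormAndStatus: find(formIDs: [42], statuses: ["approved"]) { id }
  byManager: find(hmIDs: [7], statuses: ["pending"]) { id }
}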
Takeaways
I started with the common CRUD conventions and, through a process of elimination, showed how to instead use Find and Upsert as the API for an application's persistence layer. A GraphQL case example was used along the way to demonstrate the usefulness of such a convention.
I hope you can now imagine using Find and Upsert for your next API, and have the confidence that if you had to hand it off, the next devs would praise, not curse, what you built.
Notes:
- Is this just Command–query separation? Not really, as that is a far more complex idea, involving far more than just persistence layers.