Querying Recursive Data Structures with SQL and Ecto

In one of the projects I've worked on in the past, we were tasked with implementing an attribute-based access control microservice for managing permissions for different types of entities across different resources shared between different organizations.

Problem Overview & Background

For the purposes of this post, we'll simplify greatly and use the following description of the system:

- One organization can have many subordinate organizations.
- One organization can own many resources.
- Admins in an organization have access to resources belonging to that organization, or any subordinate organizations.
- Organizations can be arbitrarily deeply nested.

We modelled this with the following database schema: a `relations` table with the columns `from_id`, `relation`, and `to_id`. Using this structure to store relations, we're able to store arbitrary relations between arbitrary entities, allowing us to do complex queries like getting all entities in all organizations that can be managed by an admin at a given tier of the hierarchy. For example, `admin_1`, an admin at the top of a hierarchy rooted at `org_1`, would be able to manage all assets of all organizations and sub-organizations of `org_1`.

During the MVP phase of the project, this microservice was thrown together pretty quickly, and generally it worked well. It was built for simplicity, though, with not much special focus on optimality, so after using this service in production for 14 months we started to notice that for particularly large organization hierarchies, queries would end up taking quite a while.

A common query we performed was to get a list of all organizations and recursively get all of the subordinate organizations for those organizations. The original implementation looked a little like this:

```elixir
def traverse_hierarchy(organization_id) do
  # ...
end
```

This implementation of `traverse_hierarchy/1` essentially finds children via a depth-first search, which is exhaustive, correct, simple, and small; however, it also performs N+1 queries, because we'd be making one call to get the children of the current node we were working on, and N calls to get the children of each child node. As our hierarchies grew large, the number of queries this would end up doing would increase exponentially with the depth of the hierarchy.

Building a solution

After doing some research, we discovered that SQL has a mechanism that allows us to recursively query data, called Common Table Expressions (CTEs). These expressions act somewhat like closures in that they allow us to run "functions" on temporary results inside an existing SQL query, letting you query further based on the results of the previous step. This essentially reduces our multiple SQL queries to a single one, following the mantra of letting your database do the heavy lifting.

Writing the query in SQL was pretty simple, since there are plenty of tutorials on recursive CTEs. CTEs are detailed and documented quite thoroughly, so it was really easy to come up with an initial CTE that would do what we wanted. I ended up with something like:

```sql
WITH RECURSIVE tree(depth, from_id, relation, to_id) AS (
  SELECT 0, from_id, relation, to_id
  FROM relations
  WHERE from_id = 'root_org_id'
    AND relation = 'is_superior_org'
  UNION
  SELECT tree.depth + 1, r.from_id, r.relation, r.to_id
  FROM relations r
  JOIN tree ON r.from_id = tree.to_id
  WHERE r.relation = 'is_superior_org'
)
SELECT depth, from_id AS parent, to_id AS child
FROM tree
```

When the above SQL query is executed, the output returned is pretty much exactly what we want: one row per relation, with `depth`, `parent`, and `child` columns.

---

Trying to do something similar and am wondering if anyone has any pointers?

Essentially, we're migrating another project with an existing Postgres database to Phoenix. My goal is to load our existing database dump in as migration 0, then to use migrations going forward. Right now I have a schema-only dump, and eventually I'd like to take a production dump of our data as well and modify migration 0 to massage our incoming data, after which point we'll use migrations both in development and in production.

Right now I'm doing:

```elixir
defmodule ... do
  execute "CREATE EXTENSION IF NOT EXISTS \"pgcrypto\""
  ...
end
```

My SQL loads fine if I import it manually, but this migration gives me:

```
** (Postgrex.Error) ERROR 42601 (syntax_error) cannot insert multiple commands into a prepared statement
```

I'm guessing `execute` is only meant for single statements, not a full SQL script? Is that correct? If so, how can I run this script as part of my first migration? I suppose I could do `ecto.load`, but I'd really like a straight migration path if possible (i.e. `ecto.migrate` does everything, so we have the greatest compatibility with buildpacks, deployment tooling, etc.).
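The post above shows only the head of `traverse_hierarchy/1`. As a rough illustration of the N+1 pattern it describes, a depth-first traversal might look something like the following sketch. This is not the author's code: the `MyApp.Repo` and `MyApp.Relation` module names and the query shape are assumptions layered on the `relations` schema from the post.

```elixir
defmodule MyApp.Hierarchy do
  # Hypothetical sketch of the N+1 depth-first traversal described above.
  # `MyApp.Repo` and `MyApp.Relation` are assumed names, not the original code.
  import Ecto.Query

  def traverse_hierarchy(organization_id) do
    # One query to fetch the direct children of this node...
    children =
      MyApp.Repo.all(
        from r in MyApp.Relation,
          where: r.from_id == ^organization_id and r.relation == "is_superior_org",
          select: r.to_id
      )

    # ...plus one recursive call (and therefore one query) per child: N+1.
    [organization_id | Enum.flat_map(children, &traverse_hierarchy/1)]
  end
end
```

Each node in the hierarchy costs one round trip to the database, which is why deep or wide hierarchies made this slow in production.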
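The post doesn't show the Ecto side of running the recursive CTE. One straightforward option, sketched here under the assumption of a `MyApp.Repo` repo module, is to send the raw SQL through `Ecto.Adapters.SQL.query/3` with the root organization id as a bound parameter:

```elixir
# Sketch: running the recursive CTE from Elixir as a parameterized raw query.
# `MyApp.Repo` is an assumed repo module name.
sql = """
WITH RECURSIVE tree(depth, from_id, relation, to_id) AS (
  SELECT 0, from_id, relation, to_id
  FROM relations
  WHERE from_id = $1 AND relation = 'is_superior_org'
  UNION
  SELECT tree.depth + 1, r.from_id, r.relation, r.to_id
  FROM relations r
  JOIN tree ON r.from_id = tree.to_id
  WHERE r.relation = 'is_superior_org'
)
SELECT depth, from_id AS parent, to_id AS child FROM tree
"""

{:ok, %{rows: rows}} = Ecto.Adapters.SQL.query(MyApp.Repo, sql, ["root_org_id"])
```

Newer Ecto versions (3.2+) also support building recursive CTEs in the query DSL via `recursive_ctes/2` and `with_cte/3`, which avoids raw SQL entirely, but the raw-query route maps most directly onto the SQL developed above.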
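Regarding the migration question above: yes, Postgrex runs each `execute` as a prepared statement, which can hold only a single command. A common workaround is to split the dump into individual statements and execute them one at a time. The sketch below assumes a dump at `priv/repo/structure.sql` whose statements end with a semicolon at end of line; the naive split will break on dumps that define functions or triggers with semicolons inside their bodies.

```elixir
defmodule MyApp.Repo.Migrations.InitialSchema do
  use Ecto.Migration

  def up do
    # Naive split on trailing semicolons; each piece becomes one execute.
    # Breaks if the dump contains function/trigger bodies with semicolons.
    "priv/repo/structure.sql"
    |> File.read!()
    |> String.split(~r/;[ \t]*\n/, trim: true)
    |> Enum.each(&execute/1)
  end

  def down do
    :ok
  end
end
```

If the dump is too complex for statement splitting, `mix ecto.load` (which feeds the file to `psql` outside the prepared-statement path) remains the fallback, at the cost of the single `ecto.migrate` entry point the question asks for.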