Giving LLM agents direct, autonomous write access to a real production database seems insane to me.
NO ONE, agent or human, should have direct write access to production databases outside of emergency break-glass scenarios. This is why we have stored routines and API layers to pre-define what writes are allowed. The fact that agents CAN autonomously write to a database does not imply that they should.
For the point about query optimization: again, your agents should not be issuing random queries against a production database. We have had the concept of separate analytics databases with different architectures to support exploratory queries for decades.
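To make the stored-routine point concrete, here is a minimal PostgreSQL sketch; the refunds workflow, table, and role names are all invented for illustration:

```sql
CREATE TABLE refunds (
  order_id     bigint      NOT NULL,
  amount       numeric     NOT NULL,
  requested_at timestamptz NOT NULL DEFAULT now()
);
CREATE ROLE agent_role;

-- The only write the agent's role can perform is through this routine.
CREATE FUNCTION request_refund(p_order_id bigint, p_amount numeric)
RETURNS void
LANGUAGE sql
SECURITY DEFINER   -- executes with the owner's rights, not the caller's
AS $$
  INSERT INTO refunds (order_id, amount, requested_at)
  VALUES (p_order_id, p_amount, now());
$$;

REVOKE ALL ON TABLE refunds FROM agent_role;   -- no direct table access
GRANT EXECUTE ON FUNCTION request_refund(bigint, numeric) TO agent_role;
```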
I agree and hope this is the case for anything serious enough. I also don't see this changing any time soon.
There are ways to give safe access to the data, at least read-only, that don't involve production risk and don't sacrifice privacy. For example, database branches with anonymization. Instead of accessing the prod/staging db, the agent creates a branch and has read/write access to that.
(disclaimer: I work at Xata, where we offer copy-on-write branches for Postgres, and the agent use-cases are the most popular right now)
I’m a DBRE. I spend a good portion of my day with a shell into one or more prod databases. The schema definitions in code are scattered between ORM model definitions, Alembic migrations, and Liquibase migrations, so the only reliable way I have of understanding a schema as it exists is to view it. Plus, I am very comfortable with SQL, and the various system catalogs of both MySQL and Postgres, so it’s a ton easier to work with.
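For example, pulling a table's definition straight from the catalogs beats reconstructing it from scattered migrations (schema and table names here are just examples):

```sql
-- Describe a table as it actually exists, from information_schema
-- (works in both MySQL and Postgres):
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'   -- in MySQL, the schema is the database name
  AND table_name   = 'orders'
ORDER BY ordinal_position;
```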
Truly sensitive customer information is encrypted, and on an isolated DB cluster that no one has regular access to. I also operate with a read-only grant, because manual writes to a prod DB are generally a terrible idea.
My guess is that if the database is subject to auditing then LLM access (obviously writes in particular, but even reads come with exfiltration risks) will be a hard "no" and instant red flag. When it's a person, there is a sense of accountability and opportunity for remediation.
I suppose that LLMs will be treated as a code artifact and liability will shift upstream towards who deployed/approved the access in the first place. Even though code is essentially deterministic, making that association fairly simple, it's going to boil down to this same paradigm.
Perhaps governance rules will evolve to even explicitly forbid it, but my gut feeling is that for what the future determines to be "practical" reasons (right or wrong) LLMs will warrant an entirely new set of rules to allow them to be in the chain at all.
+ EDIT: both my wife and I have experience in this area, and the current answer is that companies like KPMG don't have an answer yet. Existing rules do help (e.g. there had better be good documented reasons why it was used, and access had better be appropriately scoped, etc.), but there is enough ambiguity around these tools that they say "stay tuned, and take caution".
LLM agents are unlocking demand and supply for applications that wouldn't have been possible before due to time constraints though. There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
> There's a growing demand for single user or smaller scoped apps where giving LLM agents direct access means velocity. The failure/rollback model is much easier with these as long as we have good backup hygiene.
This makes no sense to me. For anything that handles sensitive payment or personally identifiable data, direct access to the DB is potentially illegal.
> The failure/rollback model is much easier with these as long as we have good backup hygiene.
Have you actually operated systems like this in production? Even reverting to a DB state that is only seconds old can still lose hundreds or thousands of transactions. Which means loads of unhappy customers. More realistically, recovery points are often minutes or hours behind once you factor in detection, validation and operational overhead.
DB revert is for exceptional disaster recovery scenarios, not something you want in normal day-to-day operations. If you are saying that you want to give LLM full access to prod DB and then revert every time it makes a mistake, you aren't running a serious business.
You are thinking way too hard. This person is a hazard that needs to learn the hard way.
If velocity means letting agents live edit a db, I'm fine being slow. Holy hell. Let these people crash and burn but definitely let me know the app name so I know never to use it first.
This narrative seems to come from people who haven't worked on meaningfully complex software systems. They're more like script kiddies than software developers. I don't mean that in a derogatory manner. They're right that LLMs are unlocking new possibilities in the realm of their work. They just don't realize that these new possibilities are constrained to relatively simple applications, or very thin slices of complex systems.
I use an LLM to access my database occasionally, but never in production and never with write access. It is genuinely useful. It would never be useful in a production setting, though.
It's worth noting too that people should be wary of what a read only user means in database land. There are plenty of foot guns where writes can occur with read-like statements, and depending on the schema, maybe this would be a rollback-worthy situation. You really need to understand your database and schema before allowing an LLM anywhere near it, and you should be reviewing every query.
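A couple of PostgreSQL-flavoured examples of "read-like" statements that aren't harmless (the function and table names are invented):

```sql
-- A data-modifying CTE reads like a report query but deletes rows;
-- it only fails if the role genuinely lacks DELETE privilege:
WITH purged AS (
  DELETE FROM sessions WHERE expires_at < now()
  RETURNING *
)
SELECT count(*) FROM purged;

-- A "read" that calls a function with side effects:
SELECT cleanup_expired_tokens();   -- hypothetical function that writes

-- Not a write, but it takes row locks on the primary all the same:
SELECT * FROM orders WHERE id = 42 FOR UPDATE;
```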
But are those users allowed to see all the data in the database by law? Some privacy laws require that personal information be hidden from employees unless they have a narrow and specific business reason to view it. Blanket full access to a database may be illegal for that reason.
If you're just vibe coding a tool for yourself, you don't have a 'production database' at all, even if you use database technology for storage. Just like many Android apps use local SQLite DBs, but those aren't production databases.
Of course in this case no traditional wisdom about production databases matters to you. In other words, it's off-topic.
It's not news that if you just give all developers at a company write access to the production databases, owner permissions on all resources, etc. that velocity can be increased. But at what cost?
The reason we don't do that in most cases is that "move fast and break things" only makes sense for trivial, non-critical applications that don't have any real importance, like Facebook.
I think the argument is mostly about companies where trivialities like proper auth were already given up to the maximum possible extent. I'm sure even some bigger ones are only gnashing their teeth over implementing security measures that are required by law, without seeing much point in them.
I’m all-in on agents but this is a “you’re holding it wrong” situation.
If you want to give your agents a DB for their own work as a scratchpad or something that’s great. They can not only go to town, but also analyze their own work and iterate on it.
If you are talking about a production database, agents should not be hitting it directly under any circumstances. There needs to be an API layer with defined usage patterns, rate limits, etc.
This is basically the same as saying “databases weren’t designed for interns to run live inline migrations in prod”. Yeah of course they aren’t.
I would hope that you're running this on a replica so that the massive table scan doesn't choke writes to the main db. Even then it's possible to bring the replica down and depending on the technology still create a problem (WAL backup for instance)
Another way to bring prod down, even with read-only access: depending on your atomicity settings, start a transaction and don't commit or abort it, just leave it dangling. That's a cute one.
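In PostgreSQL terms, something like this (the timeout value is just an example of a guard):

```sql
-- Session A: left "idle in transaction", it holds its locks and snapshot
-- open, blocking vacuum and DDL until someone notices.
BEGIN;
SELECT * FROM orders WHERE id = 42;
-- ...and then the caller simply never issues COMMIT or ROLLBACK.

-- Run separately: a server-side guard that kills such sessions automatically.
ALTER SYSTEM SET idle_in_transaction_session_timeout = '60s';
SELECT pg_reload_conf();
```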
So what if some numbers in the report are actually an order of magnitude or two outside what you think is reasonable, because something was wrong, but the AI agent reports something that looks normal?
This article has all the correct conclusions and solutions based on one assumption that doesn’t have any hold in reality - that someone would be insane enough to allow direct DB access to an AI agent.
I totally agree on investing in a sane data model upfront. So many production systems have schemas that only made sense to the engineer that created them. I would be delighted if I can read a schema and understand what a column means without having to dig through a bunch of migration PRs.
I recently encountered `is_as BOOL` in an important table. After way too much invested time we found out it meant "is active service". </DDL rant>
I integrate with many ERPs and this is the bane of my existence.
One of the worst has field names like `ft_0001...N` and table names like `UNCC_00001...N`, everything in `text` fields (even numbers!), zero FKs, almost no indexes, and what are views?
The other has this funny field that is a blob that needs decoding using a specific FreePascal version. The field? The one where the price of the product is stored.
Another has, in the same column, a mix of "," and "." as the decimal separator for numbers, and I need to check the decimal places to deduce which one is in use.
FUN.
P.S.: I normalize all these ERPs into my own schema and have gotten praise for things like my product table being called "products".
I think the best db schema I had the displeasure of working with was one where it was a requirement that every table and column name NOT have vowels, except for the few that could, and "the few that could" were governed entirely by a spreadsheet owned by the DB admin.
And so you got tables like LANDMRK and columns like RCR_RCRDR.
I work with an Oracle database like this. In the old days, there was a 30 character limit on column names, so you end up with conventions like no vowels. The limit no longer exists today, but the DBA continues to enforce the limit on new columns.
I never got an answer when I asked. This same government agency also got extremely mad when our dev manager upgraded the ASP.NET version for one project because it had some really useful features we were developing with. They deleted his permissions to deploy to production from there until the end of time, requiring us to email someone each time we wanted to update the application. It was great.
The article describes idempotency keys and then completely misses making them the PK. The example is already using UUIDv4 as a PK, so they’re clearly not optimizing for performance. If you’re using the first 32 characters of a SHA-256 hash, congratulations, store it as BINARY(16) / BYTEA - it’s less than half the size of a text-encoded UUID, to boot.
Also, the DB will most certainly not silently ignore a unique constraint violation: it will send an error back.
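Unless you explicitly ask it not to. A rough PostgreSQL sketch of the key-as-PK idea (table and column names are invented; the hash literal is filler):

```sql
CREATE TABLE payments (
  idempotency_key bytea  PRIMARY KEY,   -- e.g. first 16 bytes of the SHA-256
  amount_cents    bigint NOT NULL,
  created_at      timestamptz NOT NULL DEFAULT now()
);

-- A retried insert with the same key becomes an explicit no-op instead of
-- an unhandled unique-violation error:
INSERT INTO payments (idempotency_key, amount_cents)
VALUES (decode('9f2c84a1b7d04e55aa13c0de9f2c84a1', 'hex'), 4200)
ON CONFLICT (idempotency_key) DO NOTHING;
```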
I'm not sure why you are giving your agents access to query your OLTP database, let alone write to it. The pattern that I use at work is:
- Read access through OLAP, not OLTP. You just need to set up near real-time replication between your OLTP and OLAP.
- Write access through an API, just like your application. You can add fancy things like an approval layer, e.g. your agent cannot "ban_user(id)", but it can "request_to_ban_user(id)", and the action only happens once you approve it (see the sketch below).
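The approval layer can be as small as a pending-actions table that the API writes to; the real mutation only runs once a human approves. A rough sketch with invented names:

```sql
CREATE TABLE pending_actions (
  id           bigserial   PRIMARY KEY,
  action       text        NOT NULL,   -- e.g. 'ban_user'
  target_id    bigint      NOT NULL,
  requested_by text        NOT NULL,   -- which agent asked
  approved_by  text,                   -- stays NULL until a human signs off
  created_at   timestamptz NOT NULL DEFAULT now()
);

-- request_to_ban_user() boils down to an insert like this; a separate,
-- human-triggered job performs the actual ban for approved rows only.
INSERT INTO pending_actions (action, target_id, requested_by)
VALUES ('ban_user', 12345, 'support-agent-v2');
```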
There are two broad types of databases: operational and analytical.
Operational databases store transactions and support day-to-day application workflows.
For analysis, data is often copied into separate analytical databases (data warehouses), which are structured for efficient querying and large-scale data processing. These systems are designed to handle complex, random queries and heavy workloads.
LLM agents are the best way to analyze data stored in these databases. This is the future.
Based on my experience with Claude, it's pretty damn good at doing data analysis, if given the right curated data models. You still need to eyeball the generated SQL to make sure it makes sense.
> and how?
1. Replicate your Postgres into Snowflake/Databricks/ClickHouse/etc, or directly to Iceberg and hook it up to Snowflake/Databricks/ClickHouse/etc.
2. Give your agent read access to query it.
3. Build dimensional models (fact and dimension tables) from the raw data. You can ask an LLM for help here; Claude is pretty good at designing data models in my experience (a sketch follows below).
4. Start asking your agent questions about your data.
Keep steps 3-4 as a tight feedback loop. Every time your agent hallucinates or struggles to answer your questions, improve the model.
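Step 3 might look something like this in generic SQL (the tables and columns are invented for illustration):

```sql
CREATE TABLE dim_customer (
  customer_key bigint PRIMARY KEY,
  name         text,
  country      text,
  signed_up_on date
);

CREATE TABLE fact_orders (
  order_key    bigint PRIMARY KEY,
  customer_key bigint REFERENCES dim_customer (customer_key),
  order_date   date,
  total_cents  bigint
);

-- With the model in place, the agent's questions become simple joins:
SELECT c.country, sum(f.total_cents) / 100.0 AS revenue
FROM fact_orders f
JOIN dim_customer c USING (customer_key)
WHERE f.order_date >= date '2025-01-01'
GROUP BY c.country
ORDER BY revenue DESC;
```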
Side note: I'm currently building a platform that does all 3 (though you still need to do 2 yourself), you just need Postgres + 1 command to set it up: https://polynya.dev/
> Claude is pretty good at designing data models in my experience
Yesterday, Claude decided to go with nvarchar(100) for an IP address column instead of varbinary(16), and thinks RBAR triggers are just-as-good as temporal tables.
So, no. Claude is not good at designing data models in my experience.
(yes, the recommended way would be to simply grant only SELECT, INSERT, UPDATE, but if I were the DBA here I would definitely put in place an explicit deny)
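In SQL Server terms (where DENY is a first-class thing; role and schema names are examples, and in Postgres you'd approximate this by simply never granting the privilege):

```sql
CREATE ROLE agent_role;
GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO agent_role;
-- An explicit DENY wins over any grant the role might pick up later:
DENY DELETE, ALTER ON SCHEMA::dbo TO agent_role;
```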
Databricks introduced Lakebase exactly for this purpose. Lakebase supports branching that makes querying, schema evolution and writes cheap. Each agent can run its own branch without affecting production data.
> The contract goes something like this: the caller is a human-authored application, running deterministic code, issuing predictable queries, reviewed by a developer before deployment. Writes are intentional. Connections are brief. When something goes wrong, a human notices. The database can be dumb and fast because the application layer is smart and careful.
This assumption is that of a non-DBA who happened to get a hold of a database.
When you have sufficient users, your expected set of queries is a complete shit show. Some will be efficient, many will be poorly tested and psychotic, and indistinguishable from a non-deterministic LLM.
Also who said humans can’t query the database directly in prod? If not direct sql access, business users have the next best thing with custom reports and such. And they will very much ask for ridiculous amounts of computation to answer trivial questions.
It was a foundational assumption of SQL that business users would directly access the database and write their own queries.
It’s why row-level access and permissions exist. Use them.
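In PostgreSQL, for instance, row-level security policies do exactly this (table, role, and setting names are illustrative):

```sql
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Reporting users (or agents) only ever see rows for their own tenant:
CREATE POLICY orders_by_tenant ON orders
  FOR SELECT
  USING (tenant_id = current_setting('app.tenant_id')::bigint);

GRANT SELECT ON orders TO reporting_role;
```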
This doesn't make sense, in the context of the author's chosen example (postgres). Postgres connections are very heavy and there is a huge performance penalty for cycling them quickly, and a whole range of silly workarounds for this fact (pgbouncer). Maybe the author meant to say that sessions are brief.
I think the spirit of this post has merit, but the premise is flawed. ORMs have been causing this same class of problem for decades. Furthermore, it's not at all uncommon for humans to create different queries for the same result, and for them to follow different review paths for the same underlying database.
A query created by a human and reviewed by at least one other human becomes static after it's merged. But the query from an LLM is dynamic: it can change between two calls in the same session if the LLM sees a reason to change it, and there is no review pipeline or QA stage.
Think of a flat database table as a projection (of all previous SQL queries). Should you give agents access to it?
Probably not; maybe only for analytical (OLAP) purposes in read-only mode.
For transactional OLTP loads, it is better to use Kafka-style durable queues: have agents create a change record to mutate the state, but not touch the projection itself, which can be recomputed at an arbitrary point in time via a time-travel mechanism, branched out into different versions, etc.
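A minimal relational stand-in for that idea (names invented; in practice the queue might be Kafka rather than a table):

```sql
-- Agents may only append change records; they never touch the projection.
CREATE TABLE account_changes (
  change_id   bigserial   PRIMARY KEY,
  account_id  bigint      NOT NULL,
  delta_cents bigint      NOT NULL,
  reason      text        NOT NULL,
  recorded_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO account_changes (account_id, delta_cents, reason)
VALUES (7, -1500, 'refund issued by agent');

-- The projection (current balances) can be recomputed for any point in time:
SELECT account_id, sum(delta_cents) AS balance_cents
FROM account_changes
WHERE recorded_at <= timestamptz '2025-06-30 23:59:59+00'
GROUP BY account_id;
```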
Who the hell let agents directly use a database? Even humans don’t get this privilege. So, of all things, we forgot how to write APIs now? The article suggests creating a role for the agent directly in the database. What is wrong with you people? The very title of the article defeats its own purpose. They are not designed for this so don’t let them be used like this, ffs.
> None of this requires new technology. It requires treating the database as a defensive layer that assumes the caller might be wrong, might retry, and might not be watching the results.
This is one of those takes that is so close to understanding the problem, and then drawing an insane conclusion.
The problem is that AI agents and the code they output are untrustworthy, buggy, insecure, and lacking in any of the standards the industry has developed over the last 30 years. The solution to this is "don't use AI agents", not "change the rest of the stack to accommodate garbage".
I'm exasperated whenever I read articles like this. Anyone who underscores the difference between humans and agents by saying "[agents] write based on their current understanding of the task, which may be wrong" is clearly working with a different species of human than the one I've worked with.
In regards to having good backup hygiene: who is "we"?
I work with a team that does stuff like this, returning a 200 and a body containing "error: I didn't do what you said because _insert error here_"
The problem is that you returned OK instead of ERROR when things were not OK and there was an ERROR.
It's a design that smells of teams trying to hit some kind of internal metrics by slightly deceptive means.
Executives who wouldn’t take the time to build a report are happy to ask an AI agent to do so.
> And so you got tables like LANDMRK and columns like RCR_RCRDR.
Sounds like a table designed by Forrest Gump.
Before redesigning your database, consider seeing a psychiatrist.
> LLM agents are the best way to analyze data stored in these databases. This is the future.
Why, and how?
How do you even enforce it?
And why are you giving the agent access to a live DB in the first place?
It’s a good idea to be defensive and design the system in a way that it can “fix” itself.
But for the love of god, don’t let an LLM do everything it wants.