Interesting project! The single-writer limitation is a real pain point for multi-agent systems.
Worth mentioning ArcadeDB (https://arcadedb.com) — it's an open-source multi-model database (Apache 2.0) that supports concurrent writes natively, with graph (OpenCypher/Gremlin), document, key-value, and time-series models in one engine. No need to fork or maintain a separate project.
It also speaks the Neo4j Bolt protocol, so existing tooling works out of the box. Could be a good fit for agent memory use cases like this.
This resonates strongly. We've been working on exactly this problem with ArcadeDB — a multi-model database that natively supports graphs, documents, key-value, time-series, and vector search in a single engine. (https://arcadedb.com)
The insight about relationships growing faster than nodes is spot on, and it's why we think the graph model is the natural fit for context layers. But in practice, you also need documents, vectors, and sometimes time-series data alongside the graph. Forcing everything into a single model (or stitching together multiple databases) creates friction that kills agent workflows.
On the GQL/Cypher vs SQL point — agreed on token efficiency. We support both SQL (extended with graph capabilities) and Cypher-style syntax, and the difference in prompt size for traversal queries is dramatic. An N-hop relationship query that takes 5+ lines of SQL JOINs is a single readable line in a graph query language. For LLM-generated queries, that's not just an aesthetic win — it directly reduces error rates and token costs.
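To make the comparison concrete, here's a hedged sketch of a 3-hop traversal both ways. The schema (`persons`, `knows`, `Person`, `KNOWS`) is invented for illustration, not taken from any real dataset:

```sql
-- 3-hop "who does Alice reach in three steps" via relational JOINs
SELECT DISTINCT p3.name
FROM persons p1
JOIN knows k1 ON k1.src = p1.id
JOIN knows k2 ON k2.src = k1.dst
JOIN knows k3 ON k3.src = k2.dst
JOIN persons p3 ON p3.id = k3.dst
WHERE p1.name = 'Alice';
```

```cypher
// The same traversal as a single Cypher pattern
MATCH (:Person {name: 'Alice'})-[:KNOWS*3]->(p) RETURN DISTINCT p.name
```

Every extra hop in the SQL version adds another JOIN an LLM can get wrong; in Cypher it's one character in the `*3` bound.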
Re: GraphRAG — we've seen the same convergence. Vector similarity to find the right neighborhood, then graph traversal for structured context. Having both in one engine (ArcadeDB supports vector indexing natively) means you avoid the API orchestration overhead you mention. One query, one database, full context.
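The pattern itself is simple enough to sketch in a few lines. This is a toy in-memory illustration of the two-step flow (vector similarity to pick the entry node, then graph traversal for context), with all names and data invented for the example; it is not the ArcadeDB API:

```python
import math
from collections import deque

# Toy corpus: each node has an embedding and some text; edges link related nodes.
nodes = {
    "paper_a":  {"vec": [0.9, 0.1], "text": "Graph context layers"},
    "paper_b":  {"vec": [0.1, 0.9], "text": "Vector index internals"},
    "author_x": {"vec": [0.8, 0.2], "text": "Author of paper_a"},
}
edges = {"paper_a": ["author_x"], "author_x": ["paper_a"], "paper_b": []}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def graph_rag(query_vec, hops=1):
    # Step 1: vector similarity finds the most relevant entry node.
    entry = max(nodes, key=lambda n: cosine(nodes[n]["vec"], query_vec))
    # Step 2: BFS traversal up to `hops` collects the structured neighborhood.
    seen, frontier = {entry}, deque([(entry, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nb in edges.get(node, []):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return sorted(seen)

print(graph_rag([1.0, 0.0]))  # → ['author_x', 'paper_a']
```

In a multi-model engine both steps collapse into one query against one store; with separate vector and graph databases, step 1 and step 2 become two API calls you have to orchestrate and keep consistent.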
The training gap for graph query languages is real but closing fast. As more agent frameworks adopt graph-based context, the flywheel will kick in.
Congrats to the SurrealDB team! Shipping 3.0 is a serious milestone.
This is also a broader validation moment for the multi-model database space. In a market historically dominated by specialized, single-purpose systems (a separate DB for graphs, another for documents, another for search), it's meaningful that multiple independent projects — SurrealDB, ArcadeDB, and others — are converging on the same thesis: one database, many models. That kind of convergence signals the idea has real legs, not just as an engineering curiosity but as something the market is starting to demand.
If you're evaluating options in this space, worth also looking at ArcadeDB (https://arcadedb.com, Apache 2.0). It covers the same models — graph, document, key/value, time-series, full-text search, vector embeddings — but differs in a few practical ways:
- Query language: ArcadeDB speaks SQL, Cypher (OpenCypher-compliant with TCK testing), Gremlin, GraphQL, and MongoDB QL out of the box, so existing tooling tends to work without migration. The 26.2.1 release also added the Neo4j Bolt wire protocol, so standard Neo4j drivers connect directly.
- Time-series: a native TimeSeries model ships next week, designed to be compatible with the existing time-series ecosystem and highly optimized.
- License: Apache 2.0, with an explicit commitment to never change it. SurrealDB 3.0 ships under BSL 1.1, which only converts to Apache 2.0 in 2030.
- Runtime: Java 21, embeddable as a library or client-server, runs on Linux/macOS/Windows (x86_64 and ARM64).
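As a concrete example of the Bolt point above, the standard Neo4j Python driver can talk to the server directly. This is a minimal sketch: the host, port, and credentials are placeholders, and it assumes an ArcadeDB server running locally with the Bolt plugin enabled:

```python
# Unmodified Neo4j Python driver, pointed at a Bolt endpoint.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("root", "password"))
with driver.session() as session:
    for record in session.run("MATCH (n) RETURN count(n) AS nodes"):
        print(record["nodes"])
driver.close()
```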
Not saying one is better for all use cases — both are interesting takes on the multi-model problem. If BSL or SurrealQL lock-in are considerations for your team, ArcadeDB is in the same conversation.
Disclosure: I'm the founder of ArcadeDB and of OrientDB (now part of SAP), one of the databases SurrealDB was inspired by.
100% agree. Nobody is really interested in having ArangoDB in the cloud as a service. I'd guess >99% of users aren't paying, and the company is running out of money (salespeople cost a lot!). I think this is suicide for the product. Existing clients will stay, partly because switching is expensive: their proprietary AQL isn't easy to convert to SQL, Cypher, or Gremlin...
It's hard to make OSS sustainable without millions of dollars and VCs trying to turn that OSS tech into a huge business. With OrientDB we got lucky, but that's the past... Now I'm experimenting with a different approach: redistributing GitHub Sponsorships to the developers who actively work on the project.
After almost 18 months it's still far from sustainable. Pure OSS is one of the hardest fields to make money in, because of the average developer: most just take without giving anything back, either in work (contributions) or money.
What about https://arcadedb.com ? Open source, Apache 2.0, free for any usage. It supports SQL, but also Cypher and Gremlin (and parts of the MongoDB and Redis query languages).
I agree that a pure ODBMS doesn't make much sense today, but OrientDB is multi-model, with the object model as just one of the supported models. You can mix objects, graphs, schema-less documents, and much more, all using SQL as the query language. Boom!
Especially with OLAP queries.