24 comments:
It's a nice thought, and a nice outcome as a product. But why would an organization pay for such a product when they can very well build a RAG tuned to their business needs these days? I see that the Captain API does everything in one shot without rebuilding the RAG, but why would an organization pay for this solution when this entire chain of activity can be automated and run as a batch during non-business hours at a fraction of the cost? What's the delta efficiency that Captain brings to the table? Have you done any benchmarks? If it's negligible, I see no reason for any organization to use Captain.
Just a note on the website: at first I thought my browser had been hijacked by a shipping or travel site. The first impression is how AI has improved ship tracking, so you can now track ships with 98% accuracy, with little to no hint that this is AI infrastructure until you scroll down.
If you know what Captain is, this is not an issue. I closed the browser tab at first, thinking "what the hell is this, I don't give a damn about shipping forecasts"
Great name - Captain Hook recently hit the public domain - he would make a sick logo! (Disney-specific elements like the red jacket are still copyrighted.)
No way, that's awesome!
Looks good! I didn't get to watch the video or look at the docs in depth, but do the results trace back to the location of the answers in a document? Let's say it finds an answer in a PDF, and I'd like to know where in that PDF the citation is. Is that possible or intended?
Great question, we have deterministic page # citations for PDF results and exact bounding box citations coming very soon.
If you want to check out the Query API response example, here's a link: https://docs.runcaptain.com/api-reference/query/collection-v...
Having tried this a bit, I do really like the single API call for all of it.
I also appreciate the transparent pricing, but I'm not 100% sure of the scale of the costs. It could be helpful to give some ballparks for each of the plans; I'm not sure exactly what I could get out of a plan. My guess, trying hard to figure it out, was that if I had about 1,000 pages of new/updated content per month, I would pay $295/month for unlimited queries on top of it. Is that roughly correct?
Yes, we don't charge for queries. For $295, you're able to index up to 1000 pages of new content per month into a fully queryable pipeline.
Advanced and Basic do make a difference, though. Advanced is for complex graphics or charts in the documents submitted; Basic is sufficient for most document workloads.
This is cool, like qmd as a service with real-time integrations where it matters?
How do you handle more structured data like csv/xlsx/json? Would be cool if it were possible to auto-process links to markdown (e.g. youtube, podcast, arbitrary websites, etc) a la https://github.com/steipete/summarize (which can pull full text in addition to summarizing).
Thanks, we're just starting to optimize more for semi-structured data. So far, we've been parsing tables into Markdown and running them through the contextualized embedding model with no overlap, taking advantage of how it strings together chunks. This isn't great for big files, so we're looking into agentic exploration (slow but good for more structured numerical data) and automated graph creation (promising for more relational data).
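For anyone curious what "parse a table into Markdown, then chunk with no overlap" can look like in practice, here's a rough sketch. This is not Captain's actual pipeline; the row-group size and header repetition are my own assumptions about how you'd keep each chunk independently readable for an embedding model:

```python
import csv
import io

def table_to_markdown_chunks(csv_text: str, rows_per_chunk: int = 50) -> list[str]:
    """Convert a CSV table to Markdown and split it into non-overlapping
    chunks, repeating the header row in each chunk so every chunk stands
    on its own when embedded."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    head_md = "| " + " | ".join(header) + " |\n|" + "---|" * len(header)
    chunks = []
    for i in range(0, len(body), rows_per_chunk):  # stride == chunk size: no overlap
        block = body[i : i + rows_per_chunk]
        lines = ["| " + " | ".join(r) + " |" for r in block]
        chunks.append(head_md + "\n" + "\n".join(lines))
    return chunks

example = "sku,qty\nA,1\nB,2\nC,3\n"
chunks = table_to_markdown_chunks(example, rows_per_chunk=2)
```

The interesting failure mode the parent comment alludes to is exactly this: once the table is big, each chunk only sees a slice of rows, so aggregate questions ("what's the total qty?") need something smarter than retrieval over slices.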
Love the auto-process markdown idea, we'll add it to our roadmap :D
Congrats on the launch. Very quick feedback on the site: since I usually try to check out the blog section, I noticed the https://www.runcaptain.com/blog layout is broken on mobile. I tried Brave, Chrome, and Firefox.
This is an interesting product, thanks for sharing. Can you elaborate on some of your competitors in this landscape and what you might do differently compared to each one?
Thanks! The largest alternative to Captain is folks trying to build file search themselves. As mentioned in the post, it is a lot to manage.
The most similar product I've seen is Vertex File Search. They're hosted inside of GCP which can fit nicely into existing cloud deployments. Captain indexes from more sources (like R2 for example) and anecdotally provides faster indexing.
What about OpenSearch, Onyx, Glean Search, Kore.AI, Sana Labs?
Just some unfiltered feedback after checking out the website: from what I understand, this is SaaS only? So basically I'm asked to upload ALL company docs to a company that has existed for basically a minute, with a questionable SOC 2 report. SOC 2 is basically dead as a security artefact, and the data you're asked to upload is sensitive by nature. I don't see that working.
> Soc2 is basically dead as a security artefact
can you expand on that?
> spotty RAG
:O
The problem with these kinds of tools now is that Codex is so good you can basically build something that's good for 99% of cases in a single day, and it's free...
Look at Tobi vibe-coding QMD: he's not a full-time engineer, he vibed that up anyway, and now it's used as the de facto RAG engine for OpenClaw.
Yeah QMD is quite impressive! The main difference between us and them is the scale folks would be looking at indexing. The serverless ingestion engine I described in the post is optimized for processing large batch jobs with high concurrency. We depend on a lot of cloud compute for this which isn't something QMD's local-first environment is optimized for. That said, it's a great option for OpenClaw!
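The "large batch jobs with high concurrency" part of that is a standard bounded-concurrency fan-out, which you can sketch generically. This is not Captain's actual engine; `parse_document` is a stand-in for the real download/parse/chunk/embed work, and the semaphore cap is an arbitrary example value:

```python
import asyncio

async def parse_document(doc_id: str) -> str:
    # Stand-in for the real work: fetch, parse, chunk, embed, index.
    await asyncio.sleep(0)
    return f"indexed:{doc_id}"

async def ingest_batch(doc_ids: list[str], max_concurrency: int = 64) -> list[str]:
    """Fan a batch of documents out to async workers, capping in-flight
    work with a semaphore so a huge batch can't exhaust memory or
    downstream API quotas."""
    sem = asyncio.Semaphore(max_concurrency)

    async def worker(doc_id: str) -> str:
        async with sem:
            return await parse_document(doc_id)

    # gather preserves input order, so results line up with doc_ids.
    return await asyncio.gather(*(worker(d) for d in doc_ids))

results = asyncio.run(ingest_batch([f"doc-{i}" for i in range(10)], max_concurrency=4))
```

The contrast with a local-first tool like QMD is mostly in what sits behind `parse_document`: cloud OCR/embedding calls parallelize cheaply this way, while a single local machine saturates quickly.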
Funny you say that.
I spent the last two days building this exact thing for our internal use.
Managed to get a full RAG pipeline integrated and running with all of our company documents in less than two days' work.
Chunking, embedding and querying, connected to S3 and Google Drive, and running on our own hardware (and scaling on AWS too if needed).
Are you writing the integrations listed there, or are you using something that manages the data connections?
We've built these integrations ourselves.
For larger enterprises that require governance and additional compliance, we've been relying on trusted partners to help establish a connection to Captain.
Interesting to still see solutions being developed for RAG. We developed a solution similar to yours: automatic indexing from GDrive, SharePoint, etc., and then advanced hierarchical chunking, context-header-based Markdown conversion, and so on... all the tricks that were published last year while RAG was still the "new" kid in town. We finally open sourced everything, as the competition from the big players (Notion AI, Google, etc.) was daunting. If anyone is interested, this blog post about all the techniques we tried and what actually works is still relevant and up to date: https://bytevagabond.com/post/how-to-build-enterprise-ai-rag...
Thank you so much for this; I started reading it a few minutes ago and have already learnt quite a lot!
I like how clean and compressed the info is