This article is written manually; AI was used only to check the spelling.
Yesterday we had our first reviews for Subalta. We shipped an open beta one month ago and it did not go as planned.
For a bit of context, I started working on the project a bit more than a year ago, joining as the founding and lead software engineer.
The goal of the company is to create easy access to company funding. Our project intends to solve three problems: the centralization of funding information, the redaction (drafting) of applications, and the reporting.
During months 0 to 10, we implemented these features horizontally: we only had the base of the centralization when we decided to move on to the redaction, and before that was even done, one of us started working on the reporting feature.
Yes, let’s say we fucked up.
In my head, I had the voice of my old boss saying, "Jeremy, watch out not to spread your work." The wall was coming fast.
However, at the beginning of May, a pretty important deadline led us to rethink everything: we were selected for the Belgian Startup Award, which would take place on the 5th of June. We thought this would be an interesting moment to ship the beta, riding the buzz the BSA would produce. So we strongly restricted the scope for the deadline and focused on the centralization, access, and search of funding. Keep these three features in mind.
We developed a pretty well-architected scraper: it can crawl any website, determine whether a page is a funding opportunity or not, and, if so, generate structured data from it. Honestly, there was not a lot of work left to do on that side.
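For the curious, here is roughly the shape of that pipeline. This is a minimal sketch, not our actual code: the names (FundingOpportunity, is_funding_page, extract_funding) are hypothetical, and the real classifier and extractor are far smarter than a keyword check.

```python
# Sketch of the crawl -> classify -> extract pipeline described above.
# All names are hypothetical stand-ins for the real components.
from dataclasses import dataclass
from typing import Optional

import requests
from bs4 import BeautifulSoup


@dataclass
class FundingOpportunity:
    # Illustrative structured output; the real schema has many more fields.
    title: str
    url: str


FUNDING_HINTS = ("subsid", "grant", "funding", "call for projects")


def is_funding_page(text: str) -> bool:
    """Crude stand-in for the real page classifier."""
    lowered = text.lower()
    return any(hint in lowered for hint in FUNDING_HINTS)


def extract_funding(url: str, soup: BeautifulSoup) -> FundingOpportunity:
    """Stand-in for the structured-data extraction step."""
    title = soup.title.get_text(strip=True) if soup.title else url
    return FundingOpportunity(title=title, url=url)


def process_page(url: str) -> Optional[FundingOpportunity]:
    """Fetch a page, classify it, and extract structured data if it matches."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    text = soup.get_text(" ", strip=True)
    return extract_funding(url, soup) if is_funding_page(text) else None
```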
We had the whole UI redesigned, going from an old techy interface to a new SaaS UI with nice colors. Even I thought it looked cool. It's not the best, but it works.
In the meantime, I redesigned the entire search engine. I worked on a KNN/AI-driven filter: users can write any input, and the engine outputs the most relevant funding opportunities from our database. This feature is really useful; we even used it to get funding to develop a multi-agent LLM-based redaction engine. So I guess our work is not that bad, is it?
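To give an idea of how such a filter can work, here is a minimal sketch of embedding-based KNN search. The model name, the sample fundings, and the in-memory index are illustrative assumptions; our actual engine and database look different.

```python
# Sketch of embedding-based KNN search: embed a free-text query and
# return the nearest funding descriptions by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model choice, not necessarily the one we use.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical funding descriptions standing in for our database.
fundings = [
    "Grant for digital innovation in healthcare",
    "Subsidy for SMEs hiring their first employee",
    "Funding call for AI research collaborations",
]
funding_vecs = model.encode(fundings, normalize_embeddings=True)


def search(query: str, k: int = 2) -> list[str]:
    """Embed the query and return the k most similar fundings."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = funding_vecs @ q  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [fundings[i] for i in top]


print(search("I want to build an AI product"))
```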
Well, I would say our work is great! However, it does not solve the problem we are targeting...
During the many interviews my cofounder conducted, the big problem that came up was funding access: people simply don't know what exists. Determining eligibility is not a big pain for them; the pain is knowing what they can access in the first place. With that in mind, the filter feature we worked on is completely useless. You cannot solve a black box with another black box.
The following is only based on what I think; it will be proven, or not, by next week's interviews.
I believe people were expecting a list of the funding opportunities they are eligible for. One week before shipping, we worked on exactly that; I had had that feeling since the beginning of month 10.
So I quickly created a page that statically filters the funding based on the company’s location, size, and type.
This means the results would not be relevant, but they would be accurate. For my company, which does AI integration and software development, the platform would surface a funding opportunity about innovation in health. (One could argue that, as a software company, I might want to apply to innovative health projects, but the point stands.)
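Here is a minimal sketch of what that static filter does, under hypothetical field names. It is accurate, in that only fundings the company is formally eligible for pass, but it is not relevant, because it has no notion of what the company actually does.

```python
# Sketch of the static eligibility filter: location, size, and type only.
from dataclasses import dataclass


@dataclass
class Funding:
    title: str
    regions: set[str]
    company_types: set[str]
    max_employees: int


@dataclass
class Company:
    region: str
    company_type: str
    employees: int


def eligible_fundings(company: Company, fundings: list[Funding]) -> list[Funding]:
    """Keep every funding the company is formally eligible for; topic is ignored."""
    return [
        f for f in fundings
        if company.region in f.regions
        and company.company_type in f.company_types
        and company.employees <= f.max_employees
    ]


catalog = [
    Funding("Innovation in health", {"Wallonia"}, {"SME"}, 50),
    Funding("Digital transformation", {"Wallonia"}, {"SME"}, 250),
]
me = Company("Wallonia", "SME", 5)

# A software company still gets the health funding back: accurate, not relevant.
print([f.title for f in eligible_fundings(me, catalog)])
```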
In our heads, we expected people to know what project they wanted to fund. But in the end, when they don't even know what funding exists, can we really expect them to provide an input?
In the end, there were just a few points where we fucked up: we spread our work across features instead of finishing one, we built a filter for a pain our users don't actually have, and we assumed people could describe what they were looking for.
I believe we are not the first to fail a launch. We made errors, but we are learning. Now we must make sure it does not happen again!