Build in public · March 15, 2026 · 9 min read

From Idea to MVP in 24 Hours: A LegalTech Case Study for the OHADA Market

How JustiXia went from a Saturday morning conversation to a working AI legal assistant by Sunday night. What we cut, what we kept, and why ship-fast wins in emerging markets.


On a Saturday morning in February, a friend who runs a logistics business in Cotonou asked me a question over coffee: "My supplier is breaching our contract — what are my options under OHADA law, and how fast can I get an answer that's not just a Google search?"

I didn’t have a good answer. Neither did Google. The reality for most small businesses across the OHADA zone — the seventeen, mostly Francophone, African countries that share a unified business law framework — is that legal advice is either expensive, slow, or absent. The law is public. The interpretation is gated.

By Sunday night, JustiXia had a working v0. By Monday it had its first real users. This is the story of how, what we cut, and why I now believe ship-fast is non-negotiable in emerging markets.

Saturday 09:00 — The scope question

The temptation when building anything legal is to scope it as “a lawyer in your pocket.” That product takes years and an enormous compliance budget. Not the right move for a Saturday.

We rescoped to the smallest possible useful thing: a chatbot that takes a question in plain language, finds the relevant OHADA provision, and explains it in everyday French with an explicit “this is informational, not legal advice” disclaimer.

That’s it. No case management. No document drafting. No lawyer marketplace. No payment. One thing.

The rule I keep coming back to: the smallest version of the product that creates a real moment of value. Everything else is preference, not product.

Saturday 11:00 — Architecture in 90 minutes

The stack went together quickly because most of it was off-the-shelf.

Two architectural decisions that mattered: (1) we filtered deterministically by Act before retrieval, because asking the embedding model to disambiguate “commercial law” from “securities law” was a non-starter on the first pass; (2) we forced citations in the system prompt, so every answer ended with a reference to the specific OHADA article. Without that, the product was useless. With it, users could verify and trust.
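Decision (1) is just a two-stage lookup: a deterministic filter on the Act tag, then similarity ranking inside the filtered set. A minimal sketch — the `Chunk` shape, `retrieve`, and the hand-rolled `cosine` are illustrative stand-ins, not our actual code:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    act: str              # e.g. "AUDCG" (hypothetical Act tag)
    article: str          # e.g. "Article 281"
    text: str
    embedding: list       # precomputed vector

def cosine(a, b):
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question_embedding, act_filter, corpus, top_k=5):
    """Stage 1: keep only chunks from the requested Act (deterministic).
    Stage 2: rank the survivors by embedding similarity."""
    candidates = [c for c in corpus if c.act == act_filter]
    candidates.sort(key=lambda c: cosine(question_embedding, c.embedding),
                    reverse=True)
    return candidates[:top_k]
```

The point is that the Act filter happens in plain code before the embedding model is ever consulted, so a commercial-law question can never surface a securities-law chunk, no matter how the vectors fall.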

Saturday 14:00 — The data work nobody tells you about

90% of the build was not engineering. It was the boring work of getting clean OHADA Acts from official sources, parsing them into structured chunks, and tagging each chunk with its Act, title, chapter, and article number.

The official source for OHADA texts is ohada.org and the Journal Officiel. The PDFs are not friendly. The HTML is worse. We wrote a scraper, hand-corrected a few hundred chunk boundaries, and ended up with a clean ~12 MB corpus. This took five hours and felt like nothing was happening. It was, in retrospect, the part of the project that determined the quality of every answer the product would ever give.
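The chunking itself reduced to splitting each cleaned Act on its "Article N" headings and tagging the pieces with metadata. A simplified sketch, assuming a regex-based splitter — the function and pattern here are illustrative, and as noted above, the real pipeline still needed a few hundred hand corrections:

```python
import re

# Matches headings like "Article 28" or "Article 28a" at the start of a line.
ARTICLE_RE = re.compile(r"^Article\s+(\d+[a-z]?)\b", re.MULTILINE)

def split_into_articles(act_name, raw_text):
    """Split one cleaned Act into per-article chunks, each tagged with
    its Act name and article number."""
    matches = list(ARTICLE_RE.finditer(raw_text))
    chunks = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(raw_text)
        chunks.append({
            "act": act_name,
            "article": m.group(1),
            "text": raw_text[m.start():end].strip(),
        })
    return chunks
```

This is the boring five hours in ~20 lines: everything downstream — retrieval, citations, verification — keys off the `act`/`article` tags produced here, which is why hand-correcting the boundaries mattered so much.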

Saturday 19:00 — First real test

We sent the link to four small business owners in our network. Three real questions came back in under an hour:

  1. “My distributor is delivering 30% less than what we agreed. How long do I have to file?”
  2. “Can I refuse a payment I received by SGI without a clear motive?”
  3. “If I want to liquidate my SARL, what’s the cheapest legal path?”

Two answers were good. One was hallucinated — the model cited an article that didn’t exist. That was the moment the “ship fast” ethos almost became “ship dangerous.”

Saturday 22:00 — The hallucination fix

We added a verification step: before returning an answer, the system extracts the cited article number and checks it against the actual corpus. If the article doesn’t exist verbatim in our ingested data, we strip the citation and return the answer with a warning that no specific article could be verified.
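The gate fits in a few lines. A sketch under assumptions — `verify_citations`, the `(act, article)` index, and the citation regex are hypothetical stand-ins, and real citation formats are messier than this:

```python
import re

CITATION_RE = re.compile(r"[Aa]rticle\s+(\d+[a-z]?)")

def verify_citations(answer, cited_act, corpus_index):
    """Check every article number cited in `answer` against the ingested
    corpus. `corpus_index` is a set of (act, article_number) pairs built
    at ingestion time. If any citation is not found verbatim, strip the
    citations and append a warning instead of returning them unverified."""
    cited = CITATION_RE.findall(answer)
    unverified = [a for a in cited if (cited_act, a) not in corpus_index]
    if unverified:
        stripped = CITATION_RE.sub("[citation removed]", answer)
        return stripped + "\n\nWarning: no specific article could be verified for this answer."
    return answer
```

Because the check runs against the same index the retriever uses, an article the model invents can never survive to the user, regardless of how confidently the prompt was worded.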

This is the architectural principle that matters most for any AI legal product, and most teams skip it: the model is not allowed to cite anything that didn’t come from the knowledge base, and we verify that mechanically, not via prompt engineering. Prompts can’t guarantee. Code can.

Sunday morning — What we cut

The list of things we deliberately did not build: case management, document drafting, a lawyer marketplace, payments, and escalation to a human lawyer.

The hardest cut was the lawyer escalation, because the business-model people kept telling me “that’s where the money is.” Maybe. But you can’t learn what users want from a product that doesn’t exist yet, and adding marketplace mechanics on day one would have moved the launch from this weekend to Q3.

Sunday evening — Live

Domain: justixia.xyz. Twitter post. Two WhatsApp groups. That’s the launch.

First-week metrics: 55 users, 230 questions asked, average session length around 4 minutes. Three quotes I screenshotted and pinned to my desk:

“It’s the first time a legal answer in French has spoken my language.”

“I checked the cited article — it’s correct. Bravo.”

“When do we get the version for the labor code?”

That last one is the next sprint.

Why ship-fast is non-negotiable in emerging markets

In well-served markets, you can take six months to build something polished because someone else has already taught the user that the category exists. In the OHADA zone, in 2026, “AI legal assistant in French covering OHADA texts” is not a category. It is a thing nobody has built. The user’s mental model is empty.

That changes the math:

Six months of building a polished product in private would have produced a worse product than 24 hours of shipping followed by six months of learning in public. Not by a little — by a lot.

What I’d do differently

One thing.

I’d build the verification layer first, before the first answer the system ever returned to a real user. The Saturday-night hallucination scared me, and it should have. In a product that touches anything legal, financial, or medical, the non-negotiable invariant is “the system never cites something that doesn’t exist.” That should be the foundation, not the patch.

Everything else — the cuts, the speed, the public launch — I’d do the same way again.