On a Saturday morning in February, a friend who runs a logistics business in Cotonou asked me over coffee: “My supplier is breaching our contract — what are my options under OHADA law, and how fast can I get an answer that’s not just a Google search?”
I didn’t have a good answer. Neither did Google. The reality for most small businesses across the OHADA zone — the seventeen mostly Francophone African countries that share a unified business law framework — is that legal advice is expensive, slow, or simply absent. The law is public. The interpretation is gated.
By Sunday night, JustiXia had a working v0. By Monday it had its first real users. This is the story of how we built it, what we cut, and why I now believe ship-fast is non-negotiable in emerging markets.
Saturday 09:00 — The scope question
The temptation when building anything legal is to scope it as “a lawyer in your pocket.” That product takes years and an enormous compliance budget. Not the right move for a Saturday.
We rescoped to the smallest possible useful thing: a chatbot that takes a question in plain language, finds the relevant OHADA provision, and explains it in everyday French with an explicit “this is informational, not legal advice” disclaimer.
That’s it. No case management. No document drafting. No lawyer marketplace. No payment. One thing.
The rule I keep coming back to: the smallest version of the product that creates a real moment of value. Everything else is preference, not product.
Saturday 11:00 — Architecture in 90 minutes
The stack went together quickly because most of it was off-the-shelf:
- Front-end: a single Next.js page. Chat interface. No accounts, no email gate. Type a question, get an answer.
- LLM: Claude Sonnet 4 via the Anthropic API. French is a first-class citizen, which matters here more than most people think.
- Knowledge base: the OHADA Uniform Acts (commercial law, securities, simplified procedures, etc.) ingested as raw text and chunked into sections, stored in Postgres with the pgvector extension.
- Retrieval: semantic search over the chunks, with a deterministic filter to keep only chunks from the act the user’s question targeted.
- Hosting: Vercel. Free tier. Domain bought for €12.
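To make the LLM piece concrete, here is a minimal sketch of how a system prompt like ours could be assembled per request. All names here (`build_system_prompt`, `DISCLAIMER_FR`) are illustrative, not the actual JustiXia source; the real prompt is longer, but the three ingredients are the same: retrieved context, a citation requirement, and the disclaimer.

```python
# Hypothetical sketch of per-request system-prompt assembly.
DISCLAIMER_FR = (
    "Ceci est une information juridique générale, pas un conseil juridique."
)

def build_system_prompt(act_name: str, chunks: list[str]) -> str:
    """Assemble the system prompt: retrieved excerpts, citation rule, disclaimer."""
    context = "\n\n".join(chunks)
    return (
        f"Tu es un assistant juridique pour le droit OHADA ({act_name}).\n"
        "Réponds uniquement à partir des extraits ci-dessous et termine "
        "chaque réponse par la référence de l'article OHADA cité "
        "(ex. « Article 281, AUDCG »).\n"
        f"Ajoute toujours l'avertissement : {DISCLAIMER_FR}\n\n"
        f"Extraits :\n{context}"
    )

prompt = build_system_prompt(
    "Acte uniforme relatif au droit commercial général",
    ["Article 281 — Le vendeur est tenu de livrer les marchandises..."],
)
```

The prompt string then goes into the `system` field of the chat API call, with the user’s question as the sole message.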
Two architectural decisions that mattered: (1) we filtered deterministically by Act before retrieval, because asking the embedding model to disambiguate “commercial law” from “securities law” was a non-starter on the first pass; (2) we forced citations in the system prompt, so every answer ended with a reference to the specific OHADA article. Without that, the product was useless. With it, users could verify and trust.
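The first of those two decisions can be sketched in a few lines. This is a toy in-memory version (real chunks live in pgvector and the SQL `WHERE act = ...` clause does the filtering); the point is the order of operations: hard filter by Act first, semantic ranking second.

```python
import math

# Illustrative chunk records; in production these are rows in Postgres/pgvector.
CHUNKS = [
    {"act": "AUDCG", "article": "Article 281", "embedding": [0.9, 0.1], "text": "..."},
    {"act": "AUS",   "article": "Article 4",   "embedding": [0.1, 0.9], "text": "..."},
]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_embedding: list[float], act: str, k: int = 3) -> list[dict]:
    """Deterministic Act filter first, then rank the survivors semantically."""
    candidates = [c for c in CHUNKS if c["act"] == act]        # step 1: hard filter
    candidates.sort(key=lambda c: cosine(query_embedding, c["embedding"]),
                    reverse=True)                               # step 2: semantic rank
    return candidates[:k]
```

Because the filter is code, not embeddings, a securities-law chunk can never leak into a commercial-law answer, no matter how semantically close the text is.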
Saturday 14:00 — The data work nobody tells you about
90% of the build was not engineering. It was the boring work of getting clean OHADA Acts from official sources, parsing them into structured chunks, and tagging each chunk with its Act, title, chapter, and article number.
The official source for OHADA texts is ohada.org and the Journal Officiel. The PDFs are not friendly. The HTML is worse. We wrote a scraper, hand-corrected a few hundred chunk boundaries, and ended up with a clean ~12 MB corpus. This took five hours and felt like nothing was happening. It was, in retrospect, the part of the project that determined the quality of every answer the product would ever give.
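The chunking itself is conceptually simple, which is why the five hours went into corrections rather than code. A stripped-down sketch of the parser, assuming a regex split on article headings (the real texts needed the hand-corrected boundaries mentioned above; `chunk_act` and its output shape are illustrative):

```python
import re

# Split a raw Act text into article-level chunks, tagging each with its
# article number. Title/chapter tagging is omitted for brevity.
ARTICLE_RE = re.compile(r"(Article\s+\d+)", flags=re.IGNORECASE)

def chunk_act(act_name: str, raw_text: str) -> list[dict]:
    parts = ARTICLE_RE.split(raw_text)
    # parts looks like: [preamble, "Article 1", body1, "Article 2", body2, ...]
    return [
        {"act": act_name, "article": heading.strip(), "text": body.strip()}
        for heading, body in zip(parts[1::2], parts[2::2])
    ]

sample = "Article 1 Toute personne physique... Article 2 Le commerçant..."
chunks = chunk_act("AUDCG", sample)
```

A naive split like this breaks on cross-references (“voir Article 12”) mid-body, which is exactly where the hand corrections came in.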
Saturday 19:00 — First real test
We sent the link to four small business owners in our network. Three real questions came back in under an hour:
- “My distributor is delivering 30% less than what we agreed. How long do I have to file?”
- “Can I refuse a payment I received by SGI without a clear motive?”
- “If I want to liquidate my SARL, what’s the cheapest legal path?”
Two answers were good. One was hallucinated — the model cited an article that didn’t exist. That was the moment the “ship fast” ethos almost became “ship dangerous.”
Saturday 22:00 — The hallucination fix
We added a verification step: before returning an answer, the system extracts the cited article number and checks it against the actual corpus. If the article doesn’t exist verbatim in our ingested data, we strip the citation and return the answer with a warning that no specific article could be verified.
This is the architectural principle that matters most for any AI legal product, and most teams skip it: the model is not allowed to cite anything that didn’t come from the knowledge base, and we verify that mechanically, not via prompt engineering. Prompts can’t guarantee. Code can.
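The verification layer is small enough to show almost in full. This is a simplified sketch (the function name and warning text are illustrative): extract every cited article number, diff against the set of article numbers actually present in the corpus, and mechanically neutralise anything unverifiable.

```python
import re

CITATION_RE = re.compile(r"Article\s+(\d+)", flags=re.IGNORECASE)

def verify_citations(answer: str, known_articles: set[str]) -> str:
    """Every cited article number must exist verbatim in the ingested corpus.
    Unverifiable citations are stripped and a warning is appended."""
    cited = set(CITATION_RE.findall(answer))
    unverified = cited - known_articles
    if not unverified:
        return answer
    for num in unverified:
        answer = re.sub(rf"Article\s+{num}\b", "[article non vérifié]",
                        answer, flags=re.IGNORECASE)
    return (answer + "\n\nAvertissement : aucune disposition précise n'a pu "
            "être vérifiée pour une partie de cette réponse.")
```

No prompt instruction is consulted anywhere in this path, which is the point: the check holds even on the model’s worst day.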
Sunday morning — What we cut
The list of things we deliberately did not build:
- User accounts. Frictionless first.
- Conversation history. Stateless. If you want to come back, paste your question again.
- Multi-language. French only on day one. English came later.
- Mobile app. Web works on every phone in the OHADA zone.
- Lawyer escalation. The temptation was to monetise via referrals to local firms. We knew this would slow us down by weeks of partnership conversations. Cut.
- Payments. Free for now. Pricing comes after we’ve learned what users actually pay for.
The hardest cut was the lawyer escalation, because business model people kept telling me “that’s where the money is.” Maybe. But you can’t learn what users want from a product that doesn’t exist yet, and adding marketplace mechanics on day one would have moved the launch from this weekend to Q3.
Sunday evening — Live
Domain: justixia.xyz. Twitter post. Two WhatsApp groups. That’s the launch.
First-week metrics: 55 users, 230 questions asked, average session length around 4 minutes. Three quotes I screenshotted and pinned to my desk:
“It’s the first time a legal answer in French speaks my language.”
“I checked the cited article; it’s accurate. Well done.”
“When do we get the version for the labour code?”
That last one is the next sprint.
Why ship-fast is non-negotiable in emerging markets
In well-served markets, you can take six months to build something polished because someone else has already taught the user that the category exists. In the OHADA zone, in 2026, “AI legal assistant in French covering OHADA texts” is not a category. It is a thing nobody has built. The user’s mental model is empty.
That changes the math:
- Discovery is slow. Users don’t come to you. You have to put the product in front of them and watch.
- Feedback is gold. Every question asked teaches you what users actually need, not what you imagined.
- Compounding starts at v0. Each conversation improves the corpus, the prompts, and your understanding of the market. The earlier you start, the longer that compounding has to run.
Six months of building a polished product in private would have produced a worse product than 24 hours of shipping followed by six months of learning in public. Not by a little — by a lot.
What I’d do differently
One thing.
I’d build the verification layer first, before the first answer the system ever returned to a real user. The Saturday-night hallucination scared me, and it should have. In a product that touches anything legal, financial, or medical, the non-negotiable invariant is “the system never cites something that doesn’t exist.” That should be the foundation, not the patch.
Everything else — the cuts, the speed, the public launch — I’d do the same way again.