
We made an AI-powered platform - this is what we learnt

20 August 2025


When people hear “AI platform” (and I hear it a lot), it’s easy to imagine a simple recipe: feed some data into a large language model, sprinkle a bit of prompt magic, and let the system do the work. The reality is very different.

AI on its own is not a solution. Without strong data pipelines, safeguards, and thoughtful product design, what you get is something that looks clever in isolation but falls apart the moment people try to use it.

And here’s the part almost no one talks about: very few people stop to think about how their AI system will fail. But failure is inevitable, whether through bad data, inconsistent inputs, compliance gaps, or a mismatch with how people actually work. If you don’t anticipate those points of failure early, your platform won’t just fail quietly, it will fail publicly and lose trust.

When we built LoveMeTender, an AI SaaS tool that evaluates tenders in minutes instead of days, we learnt first-hand that the hardest problems aren’t the models themselves. They’re the engineering, guardrails, and product design around the models.

Here are some of the hurdles we faced, and what they taught us.

1. Document Processing and Privacy

The Challenge:

At first, we tried using OpenAI’s upload and indexing pipeline. Technically it worked, but it fell apart in practice. Sensitive public-sector tenders couldn’t legally sit on third-party servers, and the repeated upload/scrub/delete loop wasted time and money at scale.

The Lesson:

AI doesn’t solve compliance for you - you have to design for it. We built a custom pipeline (PDF → HTML → Markdown) that:

  • Kept all data under our control.

  • Normalised encoding to avoid downstream corruption.

  • Optimised documents to minimise tokens and costs.

The model itself wasn’t the hard part; it was the data prep, security, and efficiency engineering that made the whole system viable.
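One concrete piece of that pipeline is the clean-up pass between extraction and the model. As a rough illustration (a simplified sketch, not our production code; the function name and exact rules are invented for this post), normalising the extracted text might look like:

```python
import re
import unicodedata

def normalise_markdown(text: str) -> str:
    """Normalise encoding and strip extraction noise before the text
    reaches the model, so fewer (and cleaner) tokens are spent."""
    # Normalise to NFC so visually identical characters share one encoding.
    text = unicodedata.normalize("NFC", text)
    # Drop artefacts that PDF extraction commonly leaves behind:
    # non-breaking spaces and zero-width characters.
    text = text.replace("\u00a0", " ").replace("\u200b", "")
    # Strip trailing whitespace and collapse runs of blank lines.
    text = re.sub(r"[ \t]+\n", "\n", text)
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()
```

Small as it looks, a step like this is where the token savings and the encoding safety actually come from.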

2. Granularity Beats Simplicity

The Challenge:

An early version spat out a single “overall score” per tender. Neat, but useless. Real evaluators need question-by-question scoring, weighting, and justification. Without that, the system looked like a toy.

The Lesson:

The real risk in AI projects is oversimplification. To build trust, your outputs must mirror the way people actually work. By breaking evaluations down into granular tasks, we:

  • Anchored the model to smaller, less error-prone prompts.

  • Reduced hallucinations.

  • Produced transparent, auditable outputs users could believe.

Without that foresight, we’d have ended up with an “AI demo”, not a usable product.
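In code terms, the shift was from one prompt returning one number to a structure that keeps every question’s score, weight, and justification. A minimal sketch (field names are illustrative, not our actual schema):

```python
from dataclasses import dataclass

@dataclass
class QuestionResult:
    question_id: str
    score: float        # 0-100, produced by one small, focused prompt
    weight: float       # this question's share of the total, e.g. 0.2
    justification: str  # the model's reasoning, kept for audit

def weighted_total(results: list[QuestionResult]) -> float:
    """Roll per-question scores into an overall score while keeping
    the per-question breakdown available for the evaluator's report."""
    return sum(r.score * r.weight for r in results)
```

The overall score is still there, but now it is the sum of parts an evaluator can inspect and challenge.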

3. Weighting Is Messy, Humans Are Messier

The Challenge:

We assumed weights would be clear: 10%, 20%, adding up to 100%. In reality, some tables added up to 180%, others had gaps, and some had no weights at all. Left unchecked, the AI would happily churn out skewed, unfair results.

The Lesson:

If you don’t anticipate human inconsistency, your AI will fail silently. We built a normalisation algorithm that respected existing weights, filled gaps fairly, and scaled down bloated totals.

This wasn’t “sexy AI”; it was domain logic and maths. But without it, the whole platform would have lost credibility.

4. Industry Standards Matter (a Lot)

The Challenge:

At first, we lumped everything into one evaluation. But in real procurement, tenders are divided into “Lots”. Suppliers often only bid for one or two Lots. Our early approach punished them for ignoring irrelevant questions, producing unfair scores.

The Lesson:

Ignore industry standards, and your AI will fail the moment it meets the real world. Adding native support for Lots meant suppliers were judged fairly, evaluators got cleaner reports, and the system aligned with how procurement actually works.
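Conceptually, the fix is simple once bids are grouped by Lot: score a supplier only on the Lots they entered, and skip the rest instead of scoring them as zero. A hedged sketch (identifiers are invented for illustration):

```python
def lot_scores(answers: dict[str, dict[str, float]],
               bids: set[str]) -> dict[str, float]:
    """Average a supplier's question scores per Lot, but only for the
    Lots they actually bid on; Lots they ignored are skipped rather
    than penalised.

    `answers` maps a Lot id to {question_id: score}; `bids` is the
    set of Lot ids the supplier entered."""
    report = {}
    for lot, scores in answers.items():
        if lot in bids and scores:
            report[lot] = sum(scores.values()) / len(scores)
    return report
```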

This wasn’t just a technical fix; it was recognising that AI must adapt to reality, not the other way around.

The Bigger Picture

What we learnt building LoveMeTender is this: AI isn’t the hard part.

The hard part is:

  • Privacy, compliance, and efficiency.

  • Designing evaluation logic that matches user expectations.

  • Anticipating messy human inputs and handling them gracefully.

  • Aligning with established industry practices instead of fighting them.

Anyone can “plug AI into a product” and get something that looks clever in a demo. But without foreseeing, and engineering around, these issues, the platform will collapse the moment it meets the real world.

Final Thought

AI isn’t magic. It’s engineering.

Building LoveMeTender proved that the success of an AI platform depends less on the model itself and more on the scaffolding around it: preprocessing pipelines, weighting logic, UX design, compliance safeguards, and domain expertise.

That’s the difference between an AI experiment and an AI product.

