Most evaluation processes for software development companies focus on the wrong things. Portfolios, client lists, certifications, technology stacks — these are all visible. They're also largely irrelevant to the question you're actually trying to answer: will this team build the right thing, not just build the thing right?

The distinction matters because software development failure almost never happens at the level of technical execution. It happens at the level of understanding — a misalignment between what was specified and what was meant, between what was built and what the business actually needed.

Evaluating a development company means evaluating their capacity to understand your problem. Not just their capacity to execute against a specification.

What Most Evaluations Get Wrong

The typical procurement process looks like this: issue an RFP, collect proposals, compare pricing, check references, choose the one that looks most credible. This process filters for companies that are good at responding to RFPs. It does not filter for companies that are good at building complex software.

The proposals that arrive quickly — polished, detailed, fixed-price — are a signal worth examining. A complex software problem cannot be accurately scoped from a brief document and a single call. A company that quotes confidently before understanding the problem has pattern-matched your brief to something they've built before and assumed it's similar enough. Sometimes they're right. Often enough to matter, they're not.

What follows is a more useful framework: a checklist of what to assess, and a set of questions that reveal how a company actually works.

The Evaluation Checklist

Architecture and Technical Thinking

  • Can they articulate what makes your problem architecturally complex — without prompting?
  • Do they ask about scalability and long-term constraints before proposing an approach?
  • Can they explain the trade-offs of their proposed architecture, not just the benefits?
  • Do they have a clear view on when to use off-the-shelf solutions vs. custom code — and can they justify it?
  • Have they built systems that needed to scale or evolve significantly? Can they describe what changed and why?
  • Do they document architecture decisions? Can you see an example?

Process and Communication

  • What does their feedback loop look like? How often will you see working software?
  • Who will you actually be working with day-to-day — the people in the sales conversation, or someone else?
  • How do they handle requirement changes mid-project?
  • What do they do when they disagree with a client's technical direction?
  • How do they identify and communicate technical debt as it accumulates?

Commercial and Accountability

  • Do they offer fixed-price, time-and-materials, or retainer engagements — and can they explain why their model fits your type of problem?
  • What happens if scope expands? Is there a defined process, or is it renegotiated each time?
  • Will you own the codebase entirely at handover? Are there any licensing or dependency risks?
  • What's their approach to automated testing — default, or only when requested?

Team and Capacity

  • Who is building your system — employees, contractors, or subcontractors?
  • Is the team dedicated to your project, or context-switching across multiple clients?
  • What happens if a key person leaves during the engagement?
  • Is there a senior engineer reviewing architecture, or is work primarily junior-led?

Questions That Reveal How They Actually Work

Some questions are worth asking directly — not because the answer is binary, but because the quality of the answer tells you what you need to know.

"Tell me about a project that went wrong. What happened, and what did you do?"

A company that answers this honestly — with specifics, without deflecting blame onto scope creep or client decisions — understands their own failure modes. A company that can't give you a concrete example is either very new, very lucky, or not being straight with you.

"If you disagreed with a technical direction we wanted to take, what would you do?"

The answer you want: they'd tell you directly, explain the risk, propose an alternative, and ultimately defer to your decision while documenting their objection. The answer that should concern you: "We build what the client asks for." That's not a sign of respect for autonomy — it's a sign that no one is thinking.

"Can you walk me through the architecture of something you've built recently?"

Ask this in a conversation, not a presentation. The ability to explain architecture clearly — in plain language, covering the decisions made and why — is a reliable indicator of genuine understanding vs. execution-only capability.

Red Flags

  • The proposal arrives within 24 hours, with a fixed price, for a complex problem. They've pattern-matched your brief to a previous project. That may be fine. It may not. Either way, they don't actually know yet.
  • Discovery is a formality. If they're not asking difficult questions early — about constraints, failure modes, integration points, who the system serves — they're not planning to answer difficult questions later.
  • They lead with technology before understanding the problem. "We use React, Node, and AWS" before they know what you're building is a sales pitch, not engineering.
  • References are all from completed projects. Ask to speak with a client whose project is still in progress. Completed projects can be polished. Active ones reveal how the company actually behaves under pressure.
  • They can't explain trade-offs. Every technical decision involves trade-offs. If their answers only describe benefits — never costs, risks, or what they decided against — they're selling, not engineering.

The Conversation That Matters Most

Before any commercial discussion, describe your problem to them. Not the requirements — the problem. Tell them what you're trying to accomplish, what's been tried before, what the constraints are, and where the risk sits.

Watch what happens.

A capable team will ask questions you haven't thought of. They'll identify risks in your current thinking. They'll have an opinion about approach before they've seen a specification. They may push back on something you assumed was settled.

That conversation tells you more than a portfolio review, a technical assessment, or a reference check. It tells you whether they understand software development as a thinking problem or as an execution problem.

Even the companies that should concern you can usually execute well. Very few can do the thinking consistently.

This is particularly relevant if you're considering offshore development, where the feedback loop that makes that thinking possible is the first thing to break down. But it applies to any engagement: the gap between what you describe and what you mean is where most software projects fail. The question is whether your development partner is equipped to close it.

If the discovery process feels like a formality — if no one is challenging your assumptions, asking about what happens when things go wrong, or telling you something you didn't already know — you're not being evaluated as a problem. You're being quoted as a contract.

That's worth knowing before you sign anything.

If you're evaluating development partners and want a direct conversation about your problem before anything else — that's exactly how we work at CoolMinds.

Start a conversation →

Armin Marxer writes at zeroclue.dev.