When asked why we decided to build a requirements analysis tool, I immediately think back to one of our first meetings with engineers:

"Bullshit."

That was the response of the aerospace engineer sitting across the table from us as we explained what our nascent tool QVtrace could do.
We knew it could do it because we'd been doing it with our partners, other large-scale systems integrators, for months. We were ready to prove our mettle with others, to show the world that our verification engine – I tried to call it the V8 engine, but an act of unbridled mutiny quashed the name – was ready to be used by domain experts, and not solely by PhD mathematicians and physicists like our competitors' suites. But "bullshit" was the response.
We decided to go for the direct approach. We suggested they give us their system models and the system requirements, and we’d show them our capabilities. They bit.
It was near Christmas; they said they'd send us the material before year end, and we'd reconvene early in the new year.
Flash forward to January – same board room, same engineers – and the engineer started the meeting by apologizing.
They had intended to plant some faults in their system designs to test our approach; it being Christmas and all, they forgot to doctor the models, and so there were no faults to be found.
Except we did find faults: faults they didn't know about. And what we didn't know was that this was a deployed system. (Although to this day we don't have official confirmation of this.)
The next hour or so was surreal; we watched the engineers argue among themselves about what actually constitutes an error and about where the root of the error actually lay.
When we first began building QVtrace, we assumed that systems engineers would use it to design better models, so that fewer design errors would slip through to be uncovered during late-stage testing, way on the wrong end of the development path. What we discovered, first with these engineers and many times since, is that at least half (half!) of the problems lie with the requirements, not the designs.
We realized that to find errors as early as possible, we should check for errors, uh, as early as possible. This meant before any implementation was even designed. We needed a new tool that would analyze requirements and alert the author to ambiguous, overly complex, and essentially malformed engineering requirements. The concept of QVscribe was born.
Engineers already have an overabundance of tools. So where does a new requirements analysis tool fit in?
QVscribe was, above all, intended to be a usable tool, one whose utility could be grokked in seconds: no unfamiliar UIs; a task-based, not a feature-based, workflow. The first manifestation would be a sweet utility that would generate value for the user immediately.
Requirements on a large project number in the many tens of thousands. There are many mature tools for managing, tracking, and storing these requirements, but most requirements start and live within MS Word and are only later transferred to these management tools. In keeping with our approach of finding errors at the point of injection, we decided that instead of building a separate requirements analysis tool, we would integrate directly with Word, making QVscribe available to engineers within their current workflow.
We’re convinced that our approach is the right one. Talk to your customers to find their pain points, help them solve their problems by being as unobtrusive as possible, and encapsulate the power of the engine behind a compelling and intuitive interface.
We have just launched a private beta of QVscribe to test our own hypotheses and gather feedback on the efficacy of our feature set and our approach. Give it a try and share your experiences with us.
– Jordan Kyriakidis