November 30, 2023
Tying it all together

Graham Davis
Managing Director
IT Services

In my close to 30 years in IT, and Quality Assurance in particular, I've seen and heard a lot of things. Not quite attack ships on fire off the shoulder of Orion or C-beams glittering in the dark near the Tannhäuser Gate, but enough to learn a great deal and to shape how I do things now.

They say the most common cause of software failure is poor-quality requirements, and that's true. Failing to concisely identify what is required of a system, both functionally and non-functionally, whether you are doing waterfall, agile, or anything in between, leads to ambiguities that almost always result in failure.

We can't blame everything on poor requirements, though, and just hope our Business Analysts 'do better'. They are often victims of workplace processes and environment and must work with what they have. Even if said requirements are unambiguous, correct, consistent, complete, testable, and feasible, that does not mean you'll end up with a fully working system. Other very common causes of failure include anything to do with data, environments, manual processes, development errors, hardware, middleware, user error and, as we appear to be seeing with the new Rozelle Interchange in Sydney, a fundamental misunderstanding of the opportunity or problem statement and the assumptions that go with it.

I've seen testing in production result in an executive losing valuable insurance cover. I've seen release management reinstall a past version of code into test, losing weeks of effort. I've seen requirements signed off by managers who clearly hadn't read them, and I've seen processes so lax that an untested fix could be pushed into production, preventing system access globally for twelve hours.

All else being equal, good quality practices and processes will usually prevent or detect the most critical issues, but to reduce risk efficiently, systematically, and effectively, a holistic approach is required across everything.

ITIL 4 is much more than service desk optimisation. It covers General Management, Service Management and Technical Management Practices, ranging from continual improvement, knowledge management and project management, through change control, incident management and release management, and on to deployment, infrastructure and platform management and software development. Getting these practices right will make a huge difference to how your organisation operates and to the culture within which it builds or acquires software.

Quality Assurance and testing processes are paramount in delivering software that is fit for purpose. Make sure your processes are aligned with your methodology, that you understand the objective of each stage of testing, and that what you are doing is measurable and, of course, tied back to requirements, so your results are meaningful and objective rather than creating false confidence based on the number of test cases executed.
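That last distinction, counting executed test cases versus measuring results against requirements, is worth making concrete. Here is a minimal sketch (all requirement and test-case IDs are hypothetical) showing how a healthy-looking execution count can hide a failed requirement and one that was never tested at all:

```python
# Hypothetical traceability data: each test case is linked to the
# requirement(s) it exercises, plus its pass/fail result.
test_results = {
    "TC-001": {"requirements": ["REQ-1"], "passed": True},
    "TC-002": {"requirements": ["REQ-1", "REQ-2"], "passed": True},
    "TC-003": {"requirements": ["REQ-2"], "passed": False},
}
all_requirements = {"REQ-1", "REQ-2", "REQ-3"}

covered = set()   # requirements exercised by at least one test
failed = set()    # requirements with at least one failing linked test

for tc in test_results.values():
    for req in tc["requirements"]:
        covered.add(req)
        if not tc["passed"]:
            failed.add(req)

verified = covered - failed            # every linked test passed
untested = all_requirements - covered  # no test touches these at all

# "3 test cases executed" sounds fine, but the requirements view
# shows only REQ-1 verified, REQ-2 failed, and REQ-3 never tested.
print(f"Test cases executed:   {len(test_results)}")
print(f"Verified requirements: {sorted(verified)}")
print(f"Failed requirements:   {sorted(failed)}")
print(f"Untested requirements: {sorted(untested)}")
```

Reporting at the requirement level like this is what turns raw execution numbers into an objective statement of fitness for purpose.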

How reliant is your company on its data? Sales, inventory, billing, financial, customer, supplier, tax, regulatory? Is there a proper data strategy in place? Is there a single source of truth? Is it secure? How do you access, store, and retrieve it? Are you, or would you know if you were, in breach of GDPR or the Australian Privacy Act 1988? How do you use it? Could you use it better? How much does it cost you, and could you be more efficient?

Finally, how safe are you? Are you sure no one, accidentally or maliciously, can come in and jeopardise all your hard work? Most organisational security threats are not technology-related, so you can have the best tech in the world, but if your employees are clicking on the wrong links or answering questions on social media about their first pet's name, it may all be for nothing.

So, you must look at all the links in your chain, not just a few. Get your IT practices in order and do everything the best way you can. Implement best-practice quality assurance and don't let vendors dictate acceptance criteria. Invest in and understand your data, both quantitatively and qualitatively; it's much more of an asset than you might think. Know your security threats, and don't assume the solution is all tech or penetration testing.

Remember the big picture, take a step back and if something doesn’t look right, it probably isn’t. And if you have read this far, I promise my next blog will not feature my photo; real, manipulated or otherwise!
