Elevate Your Code Quality:
A Comprehensive Analysis in 2-3 Weeks!
In the world of technology, the quality of your code can make or break your success. To help elevate that quality, Finaps has introduced a Quality Assurance service in which our skilled engineers, familiar with your technology stack, spend 2-3 weeks grading your code.
The mission of Finaps’ Software Engineering domain states: “Use technologies aligned with the client’s technological strategy to deliver high-quality solutions that create business value.”
Over the years we have gained experience in pursuing this mission; one of our core offerings is even “CTO as a service”. This shows that we do more than simply “build what is being asked”: we already advise our clients on the best technological approaches to solve their problems and run their businesses on.
This consultancy skill set can be extracted and offered as a separate service: a “QA consultancy service”.
In a short period of 2-3 weeks, a small group of engineers experienced in the technology used by your organization performs a thorough analysis of your code. They focus on key quality topics and use a range of manual and (semi-)automatic analysis techniques to determine quality. Each topic is graded on a scale of 1 to 10. Combined with observations and recommendations, the results are compiled into a report. The overall score (the average of the topic grades) is added to an anonymized benchmark, which shows how you score against the average of all the projects Finaps has evaluated, including solutions built by Finaps.
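As a simple illustration of the scoring model described above, the overall score is just the average of the per-topic grades. The sketch below uses the five quality topics from this document; the grades themselves are hypothetical examples, not actual assessment results:

```python
# Illustrative sketch of the scoring model: each quality topic is
# graded on a scale of 1 to 10, and the overall score (used in the
# anonymized benchmark) is the average of the topic grades.
# The grades below are made-up examples.
topic_grades = {
    "Architecture": 7,
    "Setup & observability": 8,
    "Documentation": 6,
    "Security": 9,
    "Stability & maintainability": 7,
}

overall_score = sum(topic_grades.values()) / len(topic_grades)
print(f"Overall score: {overall_score:.1f}")  # prints "Overall score: 7.4"
```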
The quality attributes we focus on are:
- Architecture: clearly defined use of patterns (system and application level) that are applied consistently.
- Setup & observability: CI/CD pipelines, infrastructure-as-code, static code analysis, testing, observability tools.
- Documentation: is the architectural documentation of good quality and up to date (e.g. use of multiple C4 diagrams)? Is the code sufficiently commented? Are tests written in a way that lets them serve as documentation as well?
- Security: how are authentication and authorization implemented? What permission structure is in place, and is it well documented and tested?
- Stability & maintainability: how well does the code handle changes without breaking? We look at internal decoupling, dependency graphs, and the existence of tests that help avoid breaking changes.
How do we do it?
The following activities are part of the assessment. The assessment team has N+1 members, where N is the number of technologies involved, and consists of engineers of varying experience levels.
The assessment covers the following analyses:
- Test analysis: an in-depth look at the available tests, examining their quality, code coverage, and the types of automated tests present, including how tests are integrated into CI/CD pipelines, whether load tests are available to gauge system capacity, and whether documentation clearly explains how the code is used.
- Static infrastructure analysis: the steps in the CI/CD pipelines, logging practices, and the tools used to enhance overall quality.
- Documentation review: the quality of the written documentation, plus the clarity of, and commentary in, the code itself.
- Automated code analysis: running code analysis tools, examining cognitive complexity, and checking adherence to modern coding standards.
- Security evaluation: penetration tests, permission-related integration tests, dependency scans, and OWASP security scans.
- Stability & maintainability: scrutiny of the unit and integration tests that guard against breaking changes.
- Architecture review: architectural patterns at both system and application level, explored for clarity and consistency throughout the code.
- Observability check: verifying whether the correct elements are logged.
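As a small, hedged illustration of what one (semi-)automatic analysis step can look like, the sketch below uses Python's standard `ast` module to flag functions that lack docstrings, one simple signal of documentation quality. This is a toy example for illustration only, not one of the actual tools used in the assessment:

```python
import ast

def functions_missing_docstrings(source: str) -> list[str]:
    """Return the names of functions in `source` that have no docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        # Covers both ordinary and async function definitions.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

# Hypothetical sample code to analyze:
sample = '''
def documented():
    """This function has a docstring."""
    return 1

def undocumented():
    return 2
'''

print(functions_missing_docstrings(sample))  # prints "['undocumented']"
```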
Let our success stories tell the tale