Why Internal Architecture Rating Didn’t Truly Reflect What Was Happening - 10 Reasons
That's very intriguing, because you're about to hear that the real goal of architecture rating is to gain insight into both the project and system life cycles. It extends beyond what is happening within the project itself, such as project completion in the first six months or a year.

At one financial institution, the project director of a digital project (an AI and blockchain-based solution) asked the team to assess the project and check how well the technology aligned with the business goals. An internal rating was done and the score was 87%. But when our team did the fast-track architecture rating, the score was 40%. This rating helped the project head measure the readiness of the new solution to address the three intended business goals. Our rating delivered a confidential architecture score and an evaluation of how well the project and solution met the expected business goals. The rating is based on project use cases and scenarios related to business goals, business processes, system capability, technology maturity, implementation effectiveness, and operational excellence.
It’s clear that the complexity of technical implementation is not the measure of architecture excellence.
Why such a huge gap between the internal team's rating and the ICMG team's rating?
In order to understand that, let me share what we have learnt in the last 20 years.
As you are aware, there are literally hundreds of methodologies, individual architect preferences, frameworks, and then there are business opportunities and technological challenges, all of which add to the overall complexity.
When we first started rating architecture in 2006, we had to deal with "opinions" rather than actual architecture "content." It's not a TV debate, and it's not an open-ended subjective discussion about what's there, what assumptions are made, and so on. Architecture rating is a very quantitative way of separating opinion from insight.
Everyone understands the importance of architecture review and rating, but they believe it will take too much time and cost too much. This is something we've been hearing for the past 20 years or so. Project assessment and project alignment initiatives are frequently person-centric, project-centric, and non-standardized, and they lack a common structure.
Architecture is practiced in civil engineering, manufacturing, and other industries not because they have easier problems, but precisely because things are extremely complex and they have limited budgets, limited resources, and tight schedules. So why does doing architecture in IT projects take so much time? Why does it consume so many resources? And why aren't you getting anything in return?
1. Value: So we were trying to figure out how valuable an architecture rating could be. It's a great opportunity for project directors and senior management to benchmark their enterprise and digital architecture capability and its applicability in driving digital solutions and customer services, and to set a solution benchmark that beats the competition. Unfortunately, when the IT team is involved, it ends up evaluating the project around the use of software and hardware components and whether the features are properly implemented. Although the project owner tries to create the value proposition, the development stakeholders tackle their own concerns and issues.
This is further exacerbated by a lack of qualified personnel, delays in acquiring equipment and technology, teams scattered across cities and time zones, budget cuts, and more. In this melee, the project owner has no time to focus on creating value for clients, users, and key stakeholders.
2. Time and Effort: How can it be done in a short period of time? For a 12-month project, you cannot afford a six-month rating and review. And we're talking about genuine insight here; you're looking at something crucial. As a result, most teams find it difficult to gain true insight in a short period of time.
3. Expensive: Most project owners felt that this exercise was very expensive. It also needs experienced, senior team members with a clear understanding of short-term issues and long-term challenges.
Now, when we started answering these three questions, we saw the real challenges.
4. Framework - One of the biggest challenges is that everyone who uses the word "framework" has their own understanding of it. Now imagine if someone comes up and says, "This is my understanding of human anatomy," and the next person says, "No, this is my understanding of human anatomy." During COVID, instead of finding a vaccine, imagine that every doctor and every hospital in the world sat down to define their own understanding of human anatomy. They don't do that. You know why? Because more than 150 years ago, Henry Gray and his team solved this question for them. They found that every human being has only one anatomy, organized into 11 organ systems. The real problem we've seen is that projects and teams struggle to define frameworks that could capture the anatomy of their projects. And because they were unaware of what such an anatomy looks like, there was a lack of shared understanding.
5. Multiple stakeholders - What is the best way to deal with stakeholders? Everyone attempts to address the set of stakeholders relevant to their own project, each with their own skills and knowledge, often missing the stakeholders who are critical during the system life cycle.
6. Technical Issues - Every project is tailored to address a few technical issues as well as business opportunities. So someone might say, "I'm trying to assess network models," while another might say, "I'm trying to assess middleware architecture and distributed processing," and still others might say, "We're looking at managing devices and gateways," or "I'm trying to review process optimization." The data modelling team, network team, application development team, middleware team, UI (user interface) team, rule development experts, security team, and others all work in isolation, each trying to find a solution to their respective challenges.
7. Business Issues - Due to the pandemic, it's important to prioritize your investments and project portfolio: evaluate technology alignment with business goals, and explore business innovation and its impact on new products and services, revenue growth, cost reduction, compliance requirements, and heightened financial controls.
8. Architecture Artifacts - Everyone establishes their own set of priorities. As a result, there is a lack of clarity and completeness; whatever artifacts are available are relatively limited, and they do not fit the frameworks, stakeholders, or use cases, whether technical or business.
9. Artifact formats (documents/models): The worrying thing is that they are all created in different formats. Even when some of the models (strategy models, process models, network models, application models, component models, deployment models, etc.) are created, they exist in different files and file types. Within a short span of 3-6 months, all these models are outdated because the cost and time of updating these isolated models are very high. This is a classic example of how to create legacy architecture: an architecture that loses its value within 4-6 months of its creation. The real value of architecture comes when such models can be used for 4-6 years as the basis of change, upgrades, and new opportunities.
10. System life vs Project life – This is another area of big concern. Even though professionals with architecture skills are trained to understand the complexity and changes over a system's life, they often end up creating models that are limited to just the first or second release of the system; that is, project delivery takes precedence over system-life issues.
As a result, the architecture turns legacy, and it is anyone's guess that you can only create a legacy system from legacy architecture models. Our expectation is that a digital system should have a life of 6-8 years (it could be less or more for certain types of solutions). Therefore, architecture artifacts must be created that are independent of implementation details and of the technology environment, and that clearly separate technology specifications from technology implementations. The sketch below makes this separation concrete.
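To make that separation concrete at the code level, here is a minimal, hypothetical sketch (the capability and names are illustrative, not taken from the project described above): the specification states what a capability must do with no technology binding, while the implementation is one replaceable binding that can be swapped without touching the specification or the models that reference it.

```python
from abc import ABC, abstractmethod

# Technology specification: what the capability must do, with no vendor or
# implementation details. This is the artifact intended to outlive individual
# releases. (Hypothetical example, not from the cited project.)
class DocumentStore(ABC):
    @abstractmethod
    def save(self, doc_id: str, content: bytes) -> None:
        """Persist a document under the given identifier."""

    @abstractmethod
    def load(self, doc_id: str) -> bytes:
        """Return the document stored under the given identifier."""

# Technology implementation: one concrete binding of the specification.
# It can be replaced (e.g., by a cloud object store) without changing the
# specification or any model that refers to it.
class InMemoryDocumentStore(DocumentStore):
    def __init__(self) -> None:
        self._docs: dict[str, bytes] = {}

    def save(self, doc_id: str, content: bytes) -> None:
        self._docs[doc_id] = content

    def load(self, doc_id: str) -> bytes:
        return self._docs[doc_id]
```

The point of the split is that the specification, and the architecture models built on it, keeps its value across releases, while implementations are expected to change.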
It's crystal clear why project heads aren't receiving full value from architecture evaluations: the internal rating did not truly reflect what was happening. The internal team's desire to glorify software features was so strong that it did not matter to them whether the software could deliver the intended value for the business stakeholders.
The Program Director at this leading financial institution confided: "At one point in time, I had 12 projects. It was not possible to use these frameworks across different projects; there were too many frameworks. We couldn't deal with all the technology problems, and we didn't have the capacity to handle all the business responsibilities. It took a very long time. I would put in a lot of effort and get very little out of it."
So, the fast-track architecture rating had to address these problems. I'm glad that over the last two decades we've made major progress in breaking down all these barriers. We can now deliver this value in four to five weeks. We've seen this with some of the biggest corporations and projects around the world.
Our team rates various architecture models, such as business strategy models, business process models, system models, technology models, implementation models, and operation models. Once the completeness and consistency of each of these models has been rated, we rate the inter-relationships between the models in fulfilling business use cases. This helps in identifying gaps and lapses. The ICMG rating process uses Enterprise Anatomy models as the basis for evaluation and impact analysis. (A small sketch after the rating scale below illustrates how such model-level scores might roll up into an overall rating.)
ICMG Architecture Rating Scale
1. Doubtful, Questionable (0-20%)
2. Incomplete and Inconsistent (20-40%)
3. Good, Possibility to Improve (40-60%)
4. Very Good, Verified (60-80%)
5. Excellent, Proven & Clearly Demonstrated (80-100%)
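As an illustration only (the model names, weights, and averaging scheme below are assumptions for the sketch, not the actual ICMG method), the following snippet shows how per-model scores and an inter-relationship score might be blended into an overall percentage and mapped onto the rating bands above.

```python
# Hypothetical roll-up of model-level scores into an overall rating,
# mapped onto the ICMG rating bands quoted above. The weights and the
# averaging scheme are assumptions for illustration only.

ICMG_BANDS = [
    (20, "Doubtful, Questionable"),
    (40, "Incomplete and Inconsistent"),
    (60, "Good, Possibility to Improve"),
    (80, "Very Good, Verified"),
    (100, "Excellent, Proven & Clearly Demonstrated"),
]

def band(score: float) -> str:
    """Map a 0-100 score to its ICMG rating band."""
    for upper, label in ICMG_BANDS:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")

def overall_rating(model_scores: dict[str, float],
                   interrelation_score: float,
                   interrelation_weight: float = 0.5) -> float:
    """Blend per-model completeness/consistency scores with the score for
    how well the models inter-relate in fulfilling business use cases."""
    per_model_avg = sum(model_scores.values()) / len(model_scores)
    return ((1 - interrelation_weight) * per_model_avg
            + interrelation_weight * interrelation_score)

# Hypothetical project: strong individual models, weak inter-relationships.
scores = {
    "business strategy": 85,
    "business process": 80,
    "system": 75,
    "technology": 90,
    "implementation": 82,
    "operation": 78,
}
total = overall_rating(scores, interrelation_score=15)
print(f"Overall: {total:.0f}% -> {band(total)}")
# Overall: 48% -> Good, Possibility to Improve
```

Notice how strong individual model scores can still collapse to a mid-band overall rating when the inter-relationships that fulfil the business use cases are weak; that is the same pattern behind the 87% internal score versus the 40% fast-track rating described earlier.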
Are your internal ratings struggling to correctly represent what is actually happening?