Twenty years on, and stakeholders still don’t speak architect…

By Eugene McSheffrey, MEGA International
Twenty years ago this month, the US Congress passed an Act that had far-reaching effects for Enterprise Architecture (EA). The measure subsequently became known as the Clinger-Cohen Act (CCA).

Signed into law on February 10th, 1996, the CCA made it a legal requirement for US Federal agencies to appoint a Chief Information Officer and specified the CIO’s responsibilities to include “developing, maintaining and facilitating the implementation of a sound and integrated IT architecture”.

Although the Act did not specifically mention “enterprise architecture”, the agencies’ response was to adopt Enterprise Architecture frameworks like FEAF, TEAF and DoDAF. Other governments followed the US example and encouraged the use of EA. DoDAF was soon adapted by the UK and NATO to create the defence architecture frameworks MODAF and NAF.

The concept of EA is now well established. There is evidence to suggest that the job of Enterprise Architect is now regarded as the pinnacle of the information technology career path, at least in the US. Salaries offered for “Chief Enterprise Architect” exceed those for CIO or CTO, and posts with “Enterprise Architect” in the title pay more than those with similar job descriptions. Although this research may simply indicate confusion on the part of recruiters about what EA actually is, Enterprise Architects are hardly likely to argue.

In the twenty years since the CCA, Enterprise Architecture has flourished, but it has also attracted criticism as organisations have often failed to reap the anticipated benefits.

There are dangers in simply mandating the use of an architecture. If an architecture is required in order to demonstrate compliance then management will certainly ensure that one gets created. However, it may simply be viewed as just another non-functional requirement: an overhead rather than an asset. If the emphasis is on deploying the architecture, rather than employing it, then the architectural discourse will tend to focus on the inputs and activities needed to construct the architecture, rather than on outputs and results.
This seems to be at the root of a problem highlighted in a Forbes Tech article “Is Enterprise Architecture Completely Broken?”, namely that EA Frameworks are “self-referential”. “Self-referentiality” suggests a negative kind of self-absorption: a preoccupation with form rather than content. There is often a perception that architecture teams are more concerned with architecture itself than with practical problems. Assuming that everyone concerned agrees that architecture is simply a means to an end, why should this be?

The answer lies in communication. Professionals such as doctors and lawyers effectively use one language among themselves and a different one when talking to clients. It doesn’t cause any confusion because it’s generally clear which one is being used. In contrast, the technical vocabulary of EA employs commonplace words but loads them with very precise meanings. Words that are effectively synonyms in everyday language have subtle distinctions and interrelationships assigned to them. It doesn’t help that different frameworks apply different meanings to the same words, and that many architecture practitioners (and stakeholders) have their own preferred definitions. It has been remarked that Britain and America are divided by a common language. The architecture community has a similar problem.

Attempting to resolve meaning by reference to one or other architecture framework in the course of a conversation with stakeholders risks turning it into a dialogue about the framework itself. It also suggests a privileged status for the language of the architect over the language of the stakeholder. It shouldn’t be surprising if the stakeholder finds this irksome. Architecture is a collaborative activity but ultimately it’s the architect’s job to understand the stakeholder’s world, not the other way round. All of this simply reinforces the “self-referential” stereotype.

What can Enterprise Architects do to avoid this?
A good doctor will normally listen and talk to each patient in a way that is appropriate, so although they have to master a formidable technical vocabulary they don’t let that get in the way of effective communication. The doctor matches the patient’s words to the medical terminology, but it’s mainly an unspoken process. If clarification is needed, it is sought using the language of the patient, not the medical textbook.

Consulting room conversations are about patients’ concerns, symptoms, treatments and outcomes. Architects who take a similar approach with stakeholders are unlikely to be accused of self-referentiality.

So much for negative self-referentiality. Is there a good kind?
The current holder of the FEAC Institute’s Industry Award for “Leadership in Enterprise Transformation” is the European Air Traffic Management Architecture (EATMA). EATMA is used to coordinate the Single European Sky ATM Research (SESAR) project, a €2.1Bn R&D programme to completely overhaul the air traffic management of European airspace.

Terry Bromwich, Principal Enterprise Architect at National Air Traffic Services, explains some of the reasons for its success:
“It may sound obvious but organisations have limited success unless they take an Enterprise Architecture approach to their Enterprise Architecture. In the case of SESAR EATMA, a great deal of time was invested up front ensuring that the team were clear what was needed from the Architecture, who was going to use it and what data they needed.”

So, to succeed, an organisation should take an EA Approach to EA? That’s so self-referential it’s actually recursive! It’s clear, however, that the focus is on the organisation, not the architecture. If in another twenty years EA has ceased to be accused of self-referentiality (i.e. the bad kind) it will be because EA practitioners have succeeded in demonstrating that Enterprise Architecture isn’t about building better architectures, but better enterprises.

More than one way to skin a framework

written by Dr Graham Bleakley

The words “to skin” in the title of this blog are used in two senses. The first refers to stripping a number of frameworks back to their constituent parts and underlying meta-models, finding the common patterns that exist across them, and abstracting that commonality into a generic framework. The second sense applies when you lay the presentation layer of a specific framework across the generic metamodel: literally adding a skin to it. This is the process that the UPDM group has been employing over the past 7-8 years in developing the Unified Profile for DoDAF and MODAF, and it has now been taken to another level, resulting in the Unified Architecture Framework and Profile (UAF/P).

The scope of the term Unified in the UAF covers the frameworks that contribute to it; it is not intended as a catch-all for all enterprise architecture metamodels or frameworks. The frameworks that contributed to UAF were MODEM (and by association NAF), DoDAF 2.x, the DNDAF security views, Human Views and Systems of Systems Lifecycle Views. This rich combination of frameworks led to a complex and overlapping array of Viewpoints with many common features. It was while reconciling these frameworks at the viewpoint level that it became apparent that a new approach was required to present the various viewpoints.

Taking a leaf from the developers of MODEM, the UPDM group refactored all the viewpoints into a grid: the rows define levels of abstraction and the columns different forms of architectural representation. This resulted in a generic framework and a set of viewpoints, with an underlying meta-model, that allows all the donor frameworks to be expressed in their own skins and that also stands as a framework in its own right.
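
To make the grid-and-skins idea concrete, here is a minimal sketch in Python. It is purely illustrative: the row, column and view names below are assumptions chosen for the example, not the normative UAF/P grid or MODAF view titles.

```python
"""Illustrative sketch only: a generic viewpoint grid that can be 'skinned'
with a donor framework's own labels. Row, column and skin names here are
assumptions for illustration, not the normative UAF/P terms."""

# Rows: levels of abstraction; columns: forms of architectural representation.
ROWS = ["Strategic", "Operational", "Resources"]
COLUMNS = ["Taxonomy", "Structure", "Connectivity"]

# The generic framework: one viewpoint per (row, column) cell.
generic_grid = {(r, c): f"{r} {c} Viewpoint" for r in ROWS for c in COLUMNS}

# A 'skin' re-labels generic cells with a donor framework's own view names.
# These MODAF-style labels are examples only.
modaf_skin = {
    ("Strategic", "Taxonomy"): "StV-2 Capability Taxonomy",
    ("Operational", "Connectivity"): "OV-2 Operational Node Connectivity",
}

def view_name(cell, skin=None):
    """Return the presentation name for a cell, preferring the skin if one is given."""
    if skin and cell in skin:
        return skin[cell]
    return generic_grid[cell]

print(view_name(("Operational", "Connectivity")))              # generic label
print(view_name(("Operational", "Connectivity"), modaf_skin))  # skinned label
```

The point is simply that the generic cell and its skinned label refer to the same underlying viewpoint, which is what allows one meta-model to serve several donor frameworks.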

The UAF/P (which is being developed under the governance of the Object Management Group) has two levels of specification. The first is a domain meta-model that specifies the constituent concepts, relationships and viewpoints in an IDEAS Lite format; this provides the means for any Enterprise Architecture tool vendor to support the UAF. The second part of the specification defines a profile for SysML tool vendors to follow. It is expected that tool vendors will provide presentation layers or “skins” appropriate to the contributing donor frameworks, i.e. DoDAF, MODAF, NAF etc., as well as providing a standard version of the UAFP.

The advantage of this approach is that it allows users to work with the base UAF without a skin, or any of the connotations of working with an architectural framework rooted in the defence domain, commercialising it and making it more accessible to users and developers of Systems of Systems architectures. The other advantage is that it provides a basis for mapping other, less formal reference frameworks, such as those for the Smart Grid or from the Industrial Internet Consortium, onto a more formal framework that provides many of the core concepts and relationships they need to develop the complex Systems of Systems architectures they work on.

To find out more about the UAF/P, the rationale behind it, what it is, how it was developed and how it is envisaged it will be used, two of the co-chairs of the UPDM group and architects of the UAF/P, Dr Graham Bleakley and Dr Aurelijus Morkevicius, will be presenting a paper on the subject at IEA 2016.

@bleakleyGJ

Monkeys at Typewriters Painting the Forth Bridge

Ian Bailey

Let’s imagine there was once a very big EA project that involved lots of people modelling lots of things, in lots of depth, with no particular goals in mind, and in an environment where the things they were modelling were constantly changing. Under these circumstances, you’ve got “Monkeys at typewriters, painting the Forth Bridge” – a term I once heard used to describe enterprise architecture. I think it aptly describes a number of EA projects across various industries. Every bank, telco and defence organisation has had one of these, and most of them happened around the early 2000s.

I’d like to think that no-one does EA that way any more, certainly not at that scale. However, for the last couple of years various blogs and LinkedIn posts have been mourning (or celebrating) the demise of Enterprise Architecture. There are plenty of these blogs out there, and while they all share a common theme, they don’t often agree on what the cause is. Some think the EA frameworks are bloated and overly complex – which is probably true, but it seems a somewhat churlish explanation for failing EA. Some think it’s all to do with IT taking over the discipline and ignoring the business aspects. Again, I think there’s an element of truth to this, but systems architecture seems to be alive and well, so why would it fail if you just happen to call it “enterprise architecture”? Some like to blame the tools. Others argue that EA fails when you try to align business and IT, because that’s just not possible. I’d argue it’s difficult, but it’s not impossible.

So, why have so many EA projects failed? Well, I’m not sure that many have failed, and a lot of the blogs I’ve been reading seem to emphasise failures so they can sell their own particular brand of snake-oil EA that will definitely not fail. Probably.

Anyways, failing or not, dead or alive, I can’t help thinking the ones that failed did so because of the people involved. Not the individuals, but the combination of their personalities and approaches. There are a few organisational antipatterns that sometimes emerge when you get too many people of a certain personality type in the EA team.

The Men in the High Tower

OK, so they’re not all men, but the really bad ones are usually male. This is the corporate EA function. A collection of beard-stroking (told you they were men) intellectuals who set policy and guidance for all architectures in the organisation. They like to argue – often amongst themselves, but usually with project/programme architects. They tend to be somewhat detached from reality and don’t have much experience of actually building anything. I’ve encountered a few of these units over the years, and they seem to consist of highly paid, bright people who’ve somehow managed to become architects without ever having done anything else. They are universally loathed by the coders and solution architects they dictate to. They are also made of Teflon. No matter how stupid their decisions, and how obvious the link from those decisions to project overruns, they never get the blame. Some poor sucker in the project always has to take it on the chin.

 

Minecrafters

These guys love to dig. No problem is too small, and no architecture is complete until they’ve modelled the enterprise at a subatomic scale. They will spend every ounce of project budget modelling an as-is architecture that the project is intended to replace. This is sometimes called “analysis paralysis” and I’ve seen it more times than I care to think about. If you get enough of this personality type on a project (or even just one, in a leadership position) you’re in for a very expensive ride.

Steve Jobs, without the talent

These are very single-minded individuals, so sure of their ideas and so persuasive that they get to bulldoze them past all technical objections. They’re great at persuading management, and they’re all about their ideas. Unfortunately, their ideas are often rubbish. That doesn’t seem to stop them though. Steve Jobs was said to have unbelievable tenacity and bullheadedness. He also had good judgement though, and he paid attention to the details. There aren’t that many people like that out there, only ones who think they are.

The Voice of the Business 

These are the user’s friends, there to defend what the user wants against those horrible naysaying techies! Or are they? Maybe they just heard something in a user requirements session and wrote it down? Maybe they thought it was a bit weird, but didn’t want to challenge the requirement because “the customer’s always right”?

OK, so I’ve made up some really horrible caricature stereotypes, and emphasised some of their worst points. But nobody’s perfect, and my point is more that if you have an EA team made up predominantly of one of these groups, you’ve got trouble. Ideally you’d like a good mix of all those people, ready to challenge each other and get a balanced design that solves the business problem with affordable, reliable technology. You need the beard strokers to set policy, scan the horizon and specify the logical architecture. You need minecrafters to dig into some of the architectural problem areas, but you don’t want them doing deep dives on everything. You need the vision people with the ability to sell the ideas (not always their own ideas) to sceptical management, and you need good business analysts who are part of the design team rather than agents for a customer they don’t really understand.

If you don’t have all of those aspects, the EA function / project will be in trouble. Without the vision and salesmanship, the architecture will be ignored. Without the business analysis, the architecture will miss the point. Without the diggers, significant problems will be overlooked and workarounds will be needed. Without the beards, the architecture will be just business as usual. You need a balanced EA team. The projects I’ve seen fail over the years didn’t have that.

If you’ve got a lot of minecrafters with strong leadership, but you lack vision about what the goal is, or what the customer wants, then you’ve got a problem. Add a big budget to that, and you just might attain the holy grail of EA dysfunction –  monkeys at typewriters painting the Forth Bridge.

 

 

IS IT TIME? – DELIVERING END-TO-END BUSINESS SERVICE  

by George Davies, MooD International

After a number of years deploying Performance Management systems to help businesses and their leaders understand where they are and what is going on, we are seeing an important shift. Businesses are now changing, divesting, transforming and growing again. As part of this, it is no longer enough to deliver IT services and solutions; it is also now necessary to understand, and be able to ‘touch and see’, the business impact, business outcomes and business service delivered by such systems.

The ability to do this is changing the way internal services are delivered, the way external suppliers deliver service, and the way multiple businesses work together to deliver a joined-up, end-to-end business service and its related outcomes.

The shift that is now happening is central to achieving this.

Whilst a year or two ago it was enough to have ‘cool’ visualisations of data to start to get an understanding of how a business is operating, where initiatives are up to and what is going on – this is no longer sufficient. It doesn’t deliver clear insight. It doesn’t signpost the direction of change – and, importantly, it doesn’t clearly link the activity that is under way with the achievement of the business outcome, the business service or the business result.

The shift that is occurring is that such systems have to be able to ‘understand’ the business domain, the Customer’s operating model, the business architecture of the programme or business, the cause and effect of changes in the business and the impact on other areas.

So, the fundamental change that we are seeing is in the role of the ‘business architecture’ or the ‘business operating model’ in such systems. This is the layer that provides the detailed understanding of the business components that matter, how they relate to one another, and the causal impact on a component over there when a change occurs over here. It is this layer that provides the intelligence: the insight into how the components of the business are connected, how they affect one another, what happens when changes occur – and, importantly, their impact up and down the chain, left to right, from projects, assets, infrastructure, risks and operational activity right up to the business outcomes, the business services or the business results.

Business performance systems require a ‘logic layer’ beneath the visualisation that is domain relevant, a layer that truly details the architecture of the organisation or company: a layer which ‘joins up’ the moving parts up and down the delivery chain, and which makes every activity, every IT system, every process relevant to the achievement of the business outcome, business service or business result.
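
As a rough illustration of the kind of ‘logic layer’ being described (a sketch only; the component names and graph structure are invented for the example, and this is not how MooD itself is implemented), the following models business components as a directed dependency graph and walks it to find which processes and outcomes are affected when a component changes.

```python
"""Minimal sketch of a causal 'logic layer': a directed graph from delivery
assets up to business outcomes. Component names are hypothetical examples,
not a real operating model."""
from collections import defaultdict

# edges: component -> components it contributes to (upward through the chain)
contributes_to = defaultdict(list, {
    "Data Centre Migration (project)": ["CRM Platform (IT system)"],
    "CRM Platform (IT system)": ["Customer Onboarding (process)"],
    "Customer Onboarding (process)": ["Faster Time-to-Serve (business outcome)"],
})

def impacted_by(component):
    """Return every component and outcome this component ultimately contributes to."""
    seen, stack = set(), [component]
    while stack:
        for nxt in contributes_to[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A change "over here" (the project) surfaces its impact "over there" (the outcome).
print(impacted_by("Data Centre Migration (project)"))
```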

This is the key.

MooD will be exhibiting at Integrated EA 2015

A Farewell to MODAF

Ian Bailey (@MonkeyChap)

As you will no doubt be aware, MOD plans to end its support for MODAF in favour of the new version of the NATO Architecture Framework (the acronymically challenged “NAF”). MOD has been very active in the development of NAF v4, sponsoring Team Ensure (including me!) to develop the standard based on existing NAF and MODAF documentation. It now seems that NAF will be a NATO STANAG, which is good news in many ways but does mean the process is slow.

Although MODAF isn’t gone yet, I thought now would be a good time to reflect on what we did, as this is probably the last IEA Conference in MODAF’s life. IEA was established with the express purpose of starting a community of MODAF users. I think we did this, and what’s more I think we went further. The IEA community goes way beyond MODAF users, but I think everyone involved has a common view that taking a holistic view of business and technology, although difficult, is worth the effort.

MODAF itself, despite having many detractors, has thrived. It may not be as well known as DoDAF, Zachman or TOGAF, but its influence has been significant. It introduced capability modelling before any of the business journals started rabbiting on about it, and long before the other architecture frameworks adopted the approach. It had a formal meta-model when none of the others did. And that meta-model, M3, has gone on to be used in UPDM (90% identical to M3), which it now seems is the US DoD’s favoured approach. MODAF spawned TRAK, which was free of the legacy user base MODAF had, and so was able to do a lot of things we’d been trying to do in MODAF for a long time. NAF, from v3 onwards, has been based on MODAF, and most NAF users still use the MODAF documentation when developing their architectures. Even DoDAF saw fit to adopt the MODAF capability and strategic views. Oh yeah, GCHQ and NATS use it too. Despite all this, MOD never really seemed to know what to do with its little success story, and MODAF really only survived because of the sheer enthusiasm of its users.

MODAF had a tough birth. It came into existence with the purpose of extending the DoDAF specification to support the MOD acquisition approach. The work began just as MOD was fighting two expensive conflicts in far-flung locations. Funding was always tight, and priorities were always elsewhere. Inevitably, MODAF v1 was a compromise, but many of the problems were fixed in v1.1.

In the ten years MODAF has been around, I’ve seen MOD’s attitude to architecture change drastically. Initially, there was immense push-back from the military, and cash-strapped IPTs saw it as just another box-ticking exercise to waste their time. Some early projects spent large amounts of money enthusiastically trying to model the world, only to find the world had spun around since they started the modelling. After a number of failures, there was a time when it seemed no-one would ever consider architecture again. Throughout all this, though, there was a cadre of individuals who kept it going and, by applying the architecture approach sparingly and sensibly, started to make a difference. I’m pleased to say they were nearly all IEA regulars. I don’t think anyone in their right mind would try to procure a complex system today without some sort of architecture, but back in 2005 few if any were architected.

Looking to the future, I’m really pleased with NAF v4. My hopes for it being a website-based standard that could adapt quickly to changing requirements have been somewhat dashed by the requirement for a STANAG, but you can’t have it all, and the clout that a STANAG brings will more than make up for it. I’m still jealous of the freedom Nic had when he developed TRAK, and I still like the idea of an open-source framework.

MODAF was the work of many people over the years. Working with them has been thoroughly enjoyable, frequently stressful, and often humbling. There were some truly spectacular technical arguments, and some fantastically talented people involved (those two things tend to go together). Architects like an argument. People who architect architecture frameworks are able to bootstrap an argument out of thin air. I don’t ever recall a pointless argument though – every one of them was driven by a passion to build something that was right. I think NAF v4 is going to be better than MODAF ever was. They just need to do something about that acronym…

Keith Hasteley of MOD ISS will be at IEA to talk about NAF v4. I hope to see you all there.

There’s Madness in your Method

I wrote this last year, prompted by Penny asking me to write something for the website that might get people talking. After I wrote it, I thought it was more likely to annoy than stir up debate. However, @martinfowler just wrote this, which I liked, probably because he’s better at presenting a coherent case than I am. Here’s my attempt…

As a consultant, I tend to have to adapt to the customers’ ways of working. Sometimes that even means I have to learn something new. I have been considering going on a TOGAF course for a number of years now. Fear of getting into a public argument with the trainer over something pedantic like a meta-model issue or the exact meaning of “Service” has always prevented me though. Over the last few years, I have found myself working amongst TOGAF-qualified architects, and it is this that is worrying me.

I’ve always had a bit of a problem with methods. Maybe it’s an aversion to being told what to do, or just a misplaced, egotistical belief that I probably know better. Either way, methodologies suck. Big time. They all look logical enough, and they’re hard to pick fault with because they’re based on best practice and the collected wisdom of experts in their field. Collected wisdom, renowned experts and logic have never stopped me having a pop at something I don’t like though, even if I can’t rationally explain why I don’t like it.

Firstly, completing a course and ticking the right multiple choice options doesn’t mean you know what you’re doing. Government and industry are littered with MSP and PRINCE2 qualified contractors. Many of them are clearly in the wrong job and some are utterly incompetent – evidenced by the eye-popping number of failed programmes and cost overruns. We’ve all met them, and we’ve all wondered how they manage to even dress themselves in the morning, never mind find their way to work. I wonder if Google pay their PMs four times what they pay their best coders? Actually, I wonder if Google have ten times as many managers as they do coders on their software projects?

Secondly, I’d rather have one person who can think for themselves than five people who are following a methodology. Zombies at typewriters aren’t going to produce the complete works of Shakespeare. They might well come up with a new methodology though, and the accompanying manual (with sample exam questions) will probably sell more copies than King Lear.

Thirdly, it lets HR off the hook, and if there’s a bunch of people in the organisation who you should never, ever let off the hook, it’s HR. They will always take the path of least resistance in finding candidates, and their natural tendency to cover their own arses means they’ll always choose candidates with a certificate of some sort no matter how experienced they aren’t.

Fourthly, these methodologies always seem to duck the difficult bits. They’re much more likely to tell you how to run an architecture / development / engineering team than tell you how to build something useful.

Finally, I think methodologies hurt the trades they are supposed to serve. The certification schemes are highly profitable, and it is not unusual for hundreds of candidates to go through them every year. This floods the employment market with a wide spectrum of abilities all concealed under a common qualification. Couple that with HR’s love of certificates and you get huge numbers of people working in roles they are unable to perform. That hurts everyone. It tars the able practitioners with the same brush. It blots the career of people who have tried to take on too much without enough experience. And it diminishes the reputation of the trade as a whole.

@MonkeyChap

Taming the badlands of the IoT

Dr. G. J. Bleakley,  graham.bleakley@uk.ibm.com

The Internet of Things (IoT) is understood by many as the use of internet-enabled devices connected to the world wide web so that you can monitor and control “things” remotely through your smartphone or other mobile devices. Typical applications are house temperature monitoring and control, remote programming of television recordings, and managing personal comfort settings in motor vehicles.

The other view of the IoT is very different: it is about using connected devices to gather information. The information that is collected is analysed (by applying techniques such as predictive analytics) and used to understand the behaviour of the thing being monitored and can, as a result, be used to control other things. Companies such as Facebook, Amazon and Google are masters at this (ref Sterling, The Epic Struggle of the IoT).

What is happening now is that the ideas described above are being combined. The information that is being generated from the “Things” that we use is being analysed and used to control and/or improve the functioning of the “Thing” or some related service. These concepts are now being applied within industrial domains and across local and national governments, as well as to consumers. Organisations have realised that the information they now own has real value that can be monetised in ways they never considered before.

Developers are creating IoT-based solutions without thought as to how they can be used in the bigger picture, in effect creating solutions for problems that do not exist yet and without considering the true business value that they can bring. Suddenly the IoT has become a much larger, wilder space, and it is filled with danger (hence the badlands of the title).

A good example of the Industrialised IoT (IIoT) is the real-time capture of data from vehicles, which is then used to improve the development process and the response to potential customer issues. The data being captured is not just vehicle speed or braking: devices are monitoring many different aspects of the vehicle’s behaviour. In this instance, engine noise is captured, some inline spectral frequency analysis is performed, and the result is sent into the cloud to be further analysed on a server to identify engine faults. If an engine fault is identified, a number of things could then happen, depending on the fault, as the list and the sketch that follows it illustrate.

  1. A communication is sent by text message or displayed in the vehicle itself to notify the user to present the vehicle at a garage.
  2. The analysis identifies that the vehicle could be repaired by changing a software parameter so it updates the vehicle when it is parked.
  3. The analysis becomes part of a wider picture, identifying a trending issue in the vehicle and other vehicles. This could trigger a recall and also raise a defect request that needs to be addressed with the next iteration of the vehicle.
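
A minimal sketch of how those three responses might be dispatched is shown below. The fault codes, the software-fixable flag and the fleet-occurrence threshold are all hypothetical assumptions for illustration, not taken from any real telematics system.

```python
"""Hypothetical sketch of dispatching the three responses described above.
Fault categories, thresholds and actions are illustrative assumptions only."""
from dataclasses import dataclass

@dataclass
class EngineFault:
    code: str
    software_fixable: bool   # can a parameter update clear it?
    fleet_occurrences: int   # how often the same fault is seen fleet-wide

def respond(fault: EngineFault) -> list:
    actions = []
    if fault.software_fixable:
        # 2. Schedule an over-the-air parameter update while the vehicle is parked.
        actions.append(f"schedule OTA parameter update for {fault.code}")
    else:
        # 1. Tell the driver to present the vehicle at a garage.
        actions.append(f"notify driver: book garage visit for {fault.code}")
    if fault.fleet_occurrences > 1000:   # arbitrary illustrative threshold
        # 3. The fault is trending across the fleet: recall review plus design defect.
        actions.append(f"trigger recall review and raise defect for {fault.code}")
    return actions

print(respond(EngineFault("ENG-017", software_fixable=False, fleet_occurrences=1500)))
```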

This is just one of many examples of the industrialisation of the IoT. It shows the power of what is now possible and also highlights some of the dangers: issues such as security of the vehicle and its information, timely and lossless transmission of data (in both directions), ensuring that the analysis is correct, and managing the volume of data (many gigabytes per hour).

For this to work successfully it needs an approach that goes beyond the way the early IoT applications were developed. The current approach is that devices are developed to tap into existing systems, and we control them ourselves via mobile devices. This is a very unregulated world.

An example that starts to show the true complexity and scope of IoT development is a washing machine that turns itself on at night when there is a low electricity tariff. On the face of it this sounds like a simple application. The reality starts to bite when you scale this up to a city of, say, 10 million people: that could mean 2-3 million washing machines all coming on at 3 a.m., causing a huge draw on electricity resources. If the power company could capture the location of the machines that were going to be turned on, then it could potentially schedule them to run so as to even out and regulate the available supply. We have gone from a simple washing machine to something that affects the management and control of power distribution for an entire city.
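
To put rough numbers on that, here is a small sketch of the difference between every machine starting at 3 a.m. and the utility staggering the same machines across the off-peak window. All of the figures (machine count, power draw, window length) are illustrative assumptions.

```python
"""Illustrative arithmetic only: naive 3 a.m. start vs. spreading the same
machines across the off-peak window. All figures are assumptions."""

machines = 2_500_000          # 2-3 million machines in a 10-million-person city
kw_per_machine = 2.0          # rough draw of one washing machine, kW
off_peak_hours = list(range(0, 6))   # 00:00-05:59 off-peak window

# Naive behaviour: everything starts at 03:00.
naive_peak_mw = machines * kw_per_machine / 1000
print(f"all at 3 a.m.: ~{naive_peak_mw:,.0f} MW extra demand in one hour")

# Scheduled behaviour: the utility staggers starts evenly across the window.
scheduled_peak_mw = machines / len(off_peak_hours) * kw_per_machine / 1000
print(f"staggered:     ~{scheduled_peak_mw:,.0f} MW extra demand per hour")
```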

To be able to create an IoT-based system that implements the above example requires a much more disciplined approach than is currently taken. This goes far beyond simple IoT applications. It needs frameworks and standards for how these systems are specified, designed and developed. Because of the size of the infrastructure and the complexity required to sustain a system such as that described above, it really needs to be developed using Enterprise Architecture and Systems Engineering approaches based on Services.

The organisation currently doing a lot of work in this area is the Industrial Internet Consortium (IIC), a group of companies (under a year old, already with 100+ members) coming together under the leadership of IBM, Intel, GE, Cisco and AT&T to define the way forward for the IIoT.

Part of the work of the IIC is to define a reference architecture and a framework to support the design and development of IIoT architectures and applications. This should help to tame the “badlands” of the IIoT as it provides a means to:

  • Capture common business models;
  • Capture usage scenarios of where and how IoT technology is expected to be used;
  • Capture common functional behaviour;
  • Identify potential technical implementations for the functional behaviour.

The framework consists of four views:

  • A business view, to capture the motivation behind the vision, enable business leaders to rationalise decisions behind the need for an IoT-based solution and identify the capabilities needed to support the vision.
  • A usage view, used to capture typical roles, activities and scenarios to help understand how people and legacy systems would be expected to interact with or use the IoT-based solution.
  • A functional view, used to capture the functional or logical architecture that could be used to specify the systems.
  • An implementation view that provides guidance on the sort of technology that could implement the system.

The nature of these large IoT systems is that they need to be highly adaptable and tolerant of changes in technology, so when it comes to architecting such a system one of the best ways to specify it is by using a service-based approach.

So how can this framework be implemented? Currently no tool maps directly onto the framework, as it has not been formalised as a concrete metamodel. But the principles behind it are rooted in MODAF and DoDAF. It is a layered approach that maps very well onto the MODAF/DoDAF viewpoints and the traceability between them. Also, many of the elements identified in the IIC framework resonate closely with elements in MODAF/DoDAF, from Vision, Goals and Capabilities through to Roles, Activities, Services etc., thus giving us a means to understand these systems using current technology and thinking. We do not need to reinvent the wheel yet again.
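
As a rough, hypothetical illustration of that resonance, the pairings below line the four IIC views up against familiar MODAF/DoDAF viewpoint families. The mapping is my own assumption for the sake of the example, not a published IIC mapping.

```python
"""Hypothetical mapping sketch only: these pairings are illustrative
assumptions, not a published IIC-to-MODAF/DoDAF mapping."""

IIC_TO_MODAF_DODAF = {
    "Business view":       ["StV/CV (strategic and capability views)", "AV (all views)"],
    "Usage view":          ["OV (operational views: roles, activities, scenarios)"],
    "Functional view":     ["SOV/SvcV (service views)", "SV (system/functional views)"],
    "Implementation view": ["SV/TV (system and technical standards views)"],
}

for iic_view, counterparts in IIC_TO_MODAF_DODAF.items():
    print(f"{iic_view:22s} ~ {', '.join(counterparts)}")
```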

To find out more come to the Integrated EA 2015 Conference.

 

The First Enterprise Architects?

By Sally Bean, Enterprise Architecture specialist

A few months ago the Radio Times published a photo of a group of people gathered around a map table in a Battle of Britain operations room to promote a programme called Castles in the Sky. This programme was about Robert Watson-Watt’s team and their challenges in developing the radar technology that fed the Battle of Britain control system (also known as the Dowding system) with data. This film attracted some interest because the comedian Eddie Izzard had been picked to play Watson-Watt. But my attention was also drawn to that fine actor Alex Jennings in the part of Henry Tizard, the scientist who played an important role in the overall design of the control system. Others involved were Dowding himself, Patrick Blackett, the ‘Father of Operational Research’, and Keith Park, who helped to operationalise the system and executed it to great effect. As far as I’m aware this story has not been turned into a drama, but a good account of it can be found in Checkland and Holwell’s book on Information Systems.

As is well known, this information system was the uniquely differentiating factor that enabled the RAF to deny air superiority to the Luftwaffe in 1940 by executing a clearly articulated strategy to target and intercept enemy bombers. Data was collected from a range of sources, filtered and organised into information, which was then passed down the chain of command. Decisions could be made at increasing levels of detail about which sectors and squadrons to deploy and how to direct the aircraft towards the enemy. The system managed to operate in near-real-time using primitive but well-designed analogue technology to transmit and display information, since computers had not yet been invented.

What has this got to do with modern enterprise architecture? Obviously, it’s a good example of the ability of a new technology to completely transform strategy and operational procedures. But more importantly, it’s also a brilliant demonstration of a point recently made by Ian Bailey in the LinkedIn group  for this conference that “we are an information business”.   It was the way in which RAF Officers and Operational Research scientists collaborated closely to design an Information System to support the proposed strategy and tactics that provided the winning edge to Britain.  The strategic principles underpinning the mission and the aircraft activities to be carried out to achieve it guided the design of the information system elements and data flows. These in turn guided the effective social and scientific design of the roles, communication channels, algorithms and physical artifacts required at different levels to make the best operational decisions and communicate instructions rapidly. This is in contrast to the laundry-lists of user requirements for technology solutions that are often collected today with insufficient opportunities for coordination across different groups.  Also everyone from Dowding downwards was deeply embedded in the design process, rather than handing off the real thinking to consultants, as often tends to happen today.  As Churchill put it, ‘the system had been shaped and refined in constant action’.  As a result, aspects like data quality and cleaning were introduced at an early stage.  Significant effort was also put into the ‘people’ aspects of the system, such as selecting WAAF plotters with the right personality and skills, and gaining the pilots’ trust by encouraging them to visit the control centre and see the system for themselves.

A good set of high-level conceptual models of an enterprise has the potential to clarify strategic intent and explore the decisions, data sources and communication channels that are required to execute effectively and coherently.

As it happens, my own experience of enterprise architecture has some features in common with this story.  I was fortunate enough to work for a large airline that had a mature operational research group.  A common mode of working was to take an area of pain or opportunity and put together a multi-disciplinary team with OR and IT expertise and business subject matter experts to explore the issues, facilitated by an architect. We would develop some high-level models of the area and use them to think about business problems, changes and opportunities. The OR people brought strong skills in the dynamic complexities of the operation, rather than the static structures that traditional EA tends to focus on.  We then assessed the current IT and future opportunities and mapped out a coherent portfolio of projects to change things or improve performance. Frequent contact with senior executives was maintained during these studies so that they could inject a strategic perspective and had confidence in our findings.

This was in the days before enterprise architecture frameworks were widely used, and so we were quite keen to adopt the use of the Zachman Framework and TOGAF when they emerged. However, in my experience, existing EA frameworks are not strong enough in the information/decision space to adequately address the problem of designing effective management structures, or managing and exploiting information in support of action in a dynamically complex world.  It is therefore good to see the increasing popularity of systems thinking concepts as part of the EA toolkit and also the emergence of an ‘Enterprise Design’ movement that integrates Purpose and Mission with Architecture and Experience.

 


Models that clients can use

By Patrick Hoverstadt, Fractal

Before I was a consultant, I had a “proper job”. This involved persuading recalcitrant pieces of steel to be a different size and shape to the one they naturally wanted to be, often in the production of architectural steelwork to match the designs of architects. These were “real architects” designing real buildings. I remember one contract and a meeting with the architect, the client and the head of the building firm where, as often happened, the drawings didn’t totally match the reality on the ground. The problem was that if I made the steelwork to the drawings it wouldn’t really fit – or at least something would need butchering to make the steelwork fit the rest of the building. The architect could see the problem, I could see the problem, but neither the client nor the builder could see the problem at all. And that was because they couldn’t read the drawings and understand the implications for their translation into a three-dimensional curved structure. The drawing approach used was fine for a technical specialist to interpret, but hopeless for a client. The client simply could not look at the drawing and make sense of it, could not understand from the drawing what the building was going to be like, whether it was going to be light and airy or claustrophobic. The drawing told the client next to nothing about what they were getting, not because it was wrong, but because it was indecipherable – you could see their eyes glazing over as they looked at it.

And from architects of buildings to architects of enterprises…

Although I am not an enterprise architect, I do have a couple of things in common with EAs. The first is that I design organisations for a living and have done for 20 years – since I gave up the “proper job”. The second is that, like EAs, we design organisations using a modelling technique that is totally indecipherable to the vast majority of clients. In our case we use systems and cybernetic models of organisation which map out structures of operations, value creation, performance management, decision making and change, and the information and communication flows between them. The models we build tend to look like slightly weird wiring diagrams, and because we tend to get asked in to model quite complex organisations, they can be quite complex weird wiring diagrams. For a long time this indecipherability was a barrier to communicating with clients, but it was just something we took for granted and had learned to live with. We’d do the modelling away from the client and then translate the results into something they could understand.

And then we developed a new modelling format, designed to be accessible to a non-technical audience. It lacks the look and some of the sophistication of the “wiring diagram” format, but it has the advantage of being a model of their organisation that management teams can actually work with. The difference this accessibility makes can be dramatic. With one client, to whom we’d tried for several years to sell organisation redesign, merely leaving a copy of the new accessible model on the table was enough – the director of transformation took one look at it and pronounced “I want one of those!”. In another multi-national organisation, the model became a coveted artefact for senior managers and strategic decision makers. How many EA models get used for developing business strategy, or to enable senior managers to do their own organisational redesign?

For us, part of the challenge for EA is to break away from developing models that are only ever developed and used by enterprise architects. As long as EA focuses primarily on building models for itself rather than ones the business can use to solve business problems, it will struggle for the credibility to influence strategic decisions. Conversely, developing a visualisation of the organisation that becomes the touchstone for key business decision takers is one way EA could increase its influence and contribution.