Category Archives: Governance

Looking Back on Year 4

This post is part of a fond farewell to the University of Bristol as I have now finished a very enjoyable 4 years as Enterprise Architect there and I have just moved on to the role of Senior Enterprise Architect at the University down the road – UWE! UWE are building quite a significant EA capability from this year, recruiting 3 EAs in one go, led by an Assistant Director of IT dedicated wholly to Strategy and Enterprise Architecture. It was an opportunity I couldn’t turn down, but it was a difficult decision to leave the University of Bristol and I will really miss my fantastic colleagues there. IT Services at Bristol has recently recruited an excellent new CIO, Darrell Sturley, and I think he will help improve all aspects of IT considerably over the next few years. I will watch with interest!

So, back to the real subject matter of this final post on my blog: what did we achieve through Enterprise Architecture in my fourth year in the role of Bristol’s EA? The two key achievements of this year were the successful launch of Master Data Governance at the University and the beginnings of a robust Technical Design Authority capability from within IT Services. I’ve blogged quite a lot about Master Data Governance, so I will focus here on the main things I worked on to enable governance of our IT Architecture through a Technical Design Authority.

Technical Design Authority – how?

To establish a Design Authority capability I focussed less on ‘who’ should be ‘the’ authority figure, saying yea or nay to all proposed IT implementations and standards, and more on how such a design authority should behave at the University. Or, put another way, I believe the tools and processes are as important as the roles and governance structure put in place for the TDA. I designed a process (including a form for people to complete) through which all IT Change Proposals can be submitted – big or small, and whether originating from within IT Services or from elsewhere (for example through the University’s business case approval process). But I was particularly concerned that the appraisal process should be based not on educated guesses, personal views, gut feelings, or even how well presented a proposal might be, but on a formal evaluation of the proposal against three things: 1. clearly specified IT Architecture Principles, 2. an understanding of all Architectural Dependencies, and 3. agreed Target Architectures.
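As an illustration only – a minimal sketch with invented field names, not the actual form or process we used – a proposal appraisal along those lines might be recorded as something like this:

```python
# A hypothetical sketch of a TDA change proposal appraisal record.
# All field names and the example content are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class PrincipleAssessment:
    principle: str              # e.g. one of the summary IT Architecture Principles
    complies: bool
    notes: str = ""


@dataclass
class ChangeProposalAppraisal:
    title: str
    origin: str                                                    # "IT Services" or "business case process"
    principles: list[PrincipleAssessment] = field(default_factory=list)
    affected_components: list[str] = field(default_factory=list)   # taken from the dependency model
    target_architecture_fit: str = ""                              # fit with the agreed target architectures

    def recommendation(self) -> str:
        """A breach of any principle triggers a TDA conversation rather than a rubber stamp."""
        breaches = [p.principle for p in self.principles if not p.complies]
        if breaches:
            return "Refer to TDA: does not comply with " + ", ".join(breaches)
        return "Recommend approval"
```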

For this to work, documentation is needed. Over the year we created the following resources to use in the TDA process.

1. IT Architecture Principles

In the last year of my role I was given a Technical Architecture team to work with, and this great group of technical experts and I came up with a set of draft principles that we felt were particularly pertinent to the University at this time:

[Image: IT Architecture Principles]
These top 8 drill down into sub-principles, but we wanted this small number of summary principles so as not to bamboozle people submitting IT proposals! I should say that they were linked to higher-level, overarching principles published by IT Services more generally. I am a big fan of how TOGAF encourages the writing of principles, so we then articulated each principle more precisely. I’ll give one example here (a fuller articulation of the third principle in the list above):

[Image: Architecture Principle 3]
It is interesting to note that the implications can often result in actions that need to be taken! Implication 3 is particularly relevant to the Design Authority decision-making process.

2. Architectural Dependencies

It is all too easy for, say, someone in the Server Infrastructure team or the Databases team to come up with an architectural improvement without appreciating the knock-on effect on the applications architecture, or vice versa. For example, upgrading to the latest release of Oracle or SQL Server would seem to be a good idea, but clearly not if there are business applications which, for whatever reason, aren’t yet compatible with it and which might stop functioning correctly. Similarly, one area of the University might think it a great idea to purchase a new Web Portal to surface content from their system on the Web without realising they are duplicating another Web view of information offered via an existing system – ultimately creating an ill-planned web experience for visitors to our website and potentially confusion all round. I have encountered many, many examples where the lack of a holistic understanding of our IT architecture has allowed for bad decision making.

To counter this we drew up ArchiMate diagrams of our systems architecture. The Technical Architecture team and I received excellent training from Bizzdesign back in January to ensure we had several experts within IT Services able to maintain this documentation centrally going forward. We mapped out our entire infrastructure (virtualised and non-virtualised server environments, storage, network, backup services, databases and so on). Then we mapped our critical services onto it. Initially we used the free tool Archi to analyse our architectural dependencies; here’s an example:

[Image: Explaining the ArchiMate tool advantage]
Then, after explaining to the Senior Management Team how valuable it is to be able to understand, with a fuller-featured product, how architectural components interrelate, we were given the funding to purchase Bizzdesign Architect licenses. I explained that with good documentation we were also able to understand more about dependencies within our architecture generally. For example, here I can show that our student administration system depends on a lot of infrastructure components; we can show very simply via the diagram that if the Linux virtualisation platform supporting the web farm should fail, then the web client to the system (SITS:eVision) would fail too. However, the desktop client (SITS:Vision) would be unaffected, so certain users would still be able to access the system that way.

[Image: SITS ArchiMate diagram showing dependencies]
From a design authority point of view, it is valuable to use these architecture documents to ask pertinent questions about the undesirable knock-on effects that might occur if a particular IT change is approved.
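To make that concrete, here is a minimal sketch of the kind of impact analysis the ArchiMate model supports: a small ‘depends on’ graph, walked to find everything affected when a component fails. The graph mirrors the SITS example above but is a simplified assumption, not our real model.

```python
# A simplified dependency graph based on the SITS example in this post;
# an edge A -> B means "A depends on B". Walking the inverted graph answers
# "what breaks if this component fails?"
from collections import defaultdict

depends_on = {
    "SITS:eVision (web client)": ["Web farm"],
    "SITS:Vision (desktop client)": ["SITS database"],
    "Web farm": ["Linux virtualisation platform"],
    "SITS database": ["Storage"],
    "Linux virtualisation platform": ["Storage"],
}


def impacted_by(failed_component: str) -> set[str]:
    """Return everything that directly or transitively depends on the failed component."""
    dependants = defaultdict(list)            # invert the graph: B -> [things that depend on B]
    for component, deps in depends_on.items():
        for dep in deps:
            dependants[dep].append(component)
    impacted, to_visit = set(), [failed_component]
    while to_visit:
        current = to_visit.pop()
        for component in dependants[current]:
            if component not in impacted:
                impacted.add(component)
                to_visit.append(component)
    return impacted


print(impacted_by("Linux virtualisation platform"))
# {'Web farm', 'SITS:eVision (web client)'} -- the desktop client is unaffected
```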

3. Target Architectures

We could draw out target architectures using ArchiMate – these are useful, but I wanted something more accessible for those not familiar with that language. I follow the work of ITANA and decided to go with the Architecture Bricks approach they were evaluating at the time.

I developed a template based on one that the NIH in the US were using at the time:

[Image: Brick template]
and the Technical Architecture team and I produced bricks for 20 major architectural areas we support. Here’s an example:

[Image: Storage brick]
I ran a session with the IT Senior Management Team where we reviewed the 20 bricks together. It revealed key issues such as the number of retirement targets we had not planned for so far, and the number of – thus far uncoordinated – plans to move to Cloud services.

From a Design Authority point of view, once they have been evaluated holistically and approved centrally, the Bricks become ‘the bible’. They describe future plans and containment/retirement targets. Therefore, for example, if a proposal attempts to build a new service on something we’ve marked as a retirement target, we can have a fully informed conversation about whether to recommend against it.
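As a sketch of how that check might work in practice – assuming the usual ITANA-style brick sections (baseline, strategic, containment, retirement) and with invented example technologies rather than the contents of our actual Storage brick – a Design Authority review against the bricks could look like this:

```python
# A hypothetical sketch of a technology "brick" and a Design Authority check
# against its containment and retirement targets. Field names follow the
# general ITANA/NIH brick layout; the example technologies are invented.
from dataclasses import dataclass, field


@dataclass
class Brick:
    area: str
    baseline: list[str] = field(default_factory=list)       # in use today
    strategic: list[str] = field(default_factory=list)      # the target state
    containment: list[str] = field(default_factory=list)    # no new use
    retirement: list[str] = field(default_factory=list)     # being phased out


storage_brick = Brick(
    area="Storage",
    baseline=["On-premise SAN"],
    strategic=["Tiered storage service", "Cloud object storage"],
    containment=["Departmental NAS boxes"],
    retirement=["Direct-attached server storage"],
)


def review_proposal(proposal_components: list[str], bricks: list[Brick]) -> list[str]:
    """Flag any proposed component that a brick marks for containment or retirement."""
    warnings = []
    for brick in bricks:
        for component in proposal_components:
            if component in brick.retirement:
                warnings.append(f"{component}: retirement target in the {brick.area} brick")
            elif component in brick.containment:
                warnings.append(f"{component}: containment target in the {brick.area} brick")
    return warnings


print(review_proposal(["Direct-attached server storage"], [storage_brick]))
```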

In this respect, the Design Authority isn’t there to stop all IT plans that run counter to the existing plans. Instead, it is about making a clear and informed response about the implications for the organisation of deviating from a well-thought-out IT architecture plan.

So those are my top 3 contributions to the establishment of a Design Authority capability at the University. There are several other governance aspects of course, but as I said earlier, I felt it was key, first and foremost, to reduce ambiguous decision-making by having a clear decision framework within which to operate.

Thank you for taking the time to read my blog and to all the colleagues from Bristol and around the sector that I’ve had the pleasure to work with in my role… I hope to work with many of you still as I stay in the HE sector in my new role! Bye for now!

Spaghetti Grows in System Architectures – not an April Fools’ Day joke

A replay on breakfast TV this morning of the well-known Panorama hoax (1st April 1957) reminded me of the mission we’re on at Bristol to “turn spaghetti into lasagne”. This mission is number 7 on the JISC list of 10 pointers for improving organisational efficiency: spaghetti refers to the proliferation of point-to-point (tightly coupled) integrations between our University’s many IT Systems, and lasagne refers to the nicely layered systems and data architecture we’d like to achieve (see elsewhere in this blog).

However, transforming our data architecture overnight is not achievable. Instead we’ve developed a roadmap spanning several years, in which reform of our data architecture fits into the wider contexts of both Master Data Management and Service Oriented Architecture.

In November last year our senior project approval group (now known as the Systems and Process Investment Board) agreed to resource a one-year Master Data Integration Project. We will return to the same board early in 2015 with a follow-on business case, but this year’s project is concerned with delivering the following foundation work.

  • The establishment of Master Data governance and process at the University (the creation of a Master Data Governance Board and the appointment of Data Managers and Data Stewards as part of existing roles throughout the University – responsible for data quality in their domains and for following newly defined Change of Data processes),
  • Completion of documentation of all the spaghetti (aka the integrations between our IT systems) in our Interface Catalogue, and documentation of our Master Data Entities (and their attributes and relationships) in an online Enterprise Data Dictionary (developed in-house) – a sketch of what a catalogue entry might record follows this list,
  • Development of a SOA blueprint for the University, including our target data architecture. This will be done with the help of a SOA consultant and will inform the follow-on business case for SOA at Bristol, which we hope the University will fund from 2015.
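For illustration, here is a minimal sketch of the kind of record the Interface Catalogue might hold; the field names and the example entry are assumptions, not the in-house tool itself.

```python
# A hypothetical sketch of an Interface Catalogue record. All field names and
# the example entry are invented for illustration.
from dataclasses import dataclass


@dataclass
class InterfaceCatalogueEntry:
    source_system: str
    target_system: str
    master_data_entities: tuple[str, ...]    # entities carried, e.g. ("Student",)
    mechanism: str                           # e.g. "point-to-point database link", "web service"
    frequency: str                           # e.g. "nightly", "real-time"
    owner: str


example = InterfaceCatalogueEntry(
    source_system="Student records system",
    target_system="Library system",
    master_data_entities=("Student",),
    mechanism="point-to-point database link",
    frequency="nightly",
    owner="Library Systems team",
)
```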

We are undertaking this work with the following resources: Enterprise Architect (me) at 0.3FTE for a year, a Business Analyst (trained in Enterprise and Solutions Architecture) at 0.5FTE, a Project Manager at 0.3FTE, IT development time (both for developing the Enterprise Data Dictionary and for helping to populate the Interface Catalogue with information) and approximately £60K of consultancy.

We had some very useful consultancy earlier this year from Intelligent Business Strategies: several insightful conversations with its MD, Mike Ferguson, and a week with Henrik Soerensen. From this we were able to draw up a Master Data Governance structure tailored to our organisation, which we are now trialling.

[Image: Master Data Governance Structure v3]
This work also helped us to consider key issues around governance processes and how to capture key information – such as business rules around data – in the online data dictionary.
Later this year we will be working for an extended period with an independent SOA consultant based in the South West, Ben Wilcock of SOA Growers. We have already worked with Ben a little this year and I am very much looking forward to collaborating with him further to develop our target data architecture (most likely a set of master data services, supporting basic CRUD operations) within the context of a SOA blueprint for our enterprise architecture.
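As a rough sketch of what “a set of master data services, supporting basic CRUD operations” could look like as an interface – independent of any particular SOA product, and with an assumed Student entity purely for illustration:

```python
# A hypothetical sketch of a master data service interface. The Student entity,
# its fields, and the service name are assumptions, not our actual design.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Student:
    student_id: str
    name: str
    programme: str


class StudentMasterDataService(ABC):
    """Consumers call this service rather than connecting to the master system directly."""

    @abstractmethod
    def create(self, student: Student) -> None: ...

    @abstractmethod
    def read(self, student_id: str) -> Student: ...

    @abstractmethod
    def update(self, student: Student) -> None: ...

    @abstractmethod
    def delete(self, student_id: str) -> None: ...
```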

Looking back on Year 2

As a follow-on to my blog post that reflected on year 1 of EA at Bristol (https://enterprisearchitect.blogs.bristol.ac.uk/2012/04/17/looking-back-on-the-first-year-of-my-ea-role-at-bristol/), here’s a summary of the top three key things I covered in year 2:


  • Embedding EA within the Governance Structure at Bristol. One problem I wanted to tackle was how to ensure that EA is a formal consideration whenever new IT-related projects are proposed at Bristol, whether they involve in-house IT development or the procurement of third-party systems. Until this second year, EA was acknowledged as valuable at Bristol, but there was no place in the project approval process at which the Enterprise Architect could make a clear recommendation based on an appraisal of a new project’s fit with Bristol’s business architecture, information architecture and technical architecture. When I started in my role I was invited to be part of a group called the System Programme Managers Group (SPMG), which met monthly to review current and proposed projects and which comprised managers leading various University programmes (IT, Education, Planning etc.). I ran a workshop with this group entitled “Capacity for Change” and talked about how we should review potential projects in terms of what priority they should have in the already packed project portfolio the University was funding. This, in combination with some senior-level initiatives, culminated in the renaming of the SPMG to the “Portfolio Management Group” (PMG). Its remit is now to support the higher-level Portfolio Executive group more directly by making recommendations on any new project’s fit with the strategic priorities of the University. We have aligned the strategic priorities with the benefits mapping work I mention above, and each new business case is now submitted with a PMG recommendation describing whether we have existing resources to support the proposed project, its priority level, and a statement of its fit with the enterprise architecture – hurray! This is not to say that Enterprise Architecture considerations now dictate decision-making, just that they have become formally recognised as part of the project approval governance process; a positive step forward.


  • Developing the SOA roadmap for Bristol. After the workshop for the Portfolio Executive mentioned above, I submitted a Stage 0 business case (entitled Master Data Integration Framework), which was approved. The next step is to develop the Stage 1 business case for presentation to the Portfolio Executive this Autumn, and for this I am developing the roadmap, with an indication of costs and benefits along the way, in consultation with others. For this work I have maintained a completely separate blog; see http://coredataintegration.isys.bris.ac.uk/category/final-project-report/ for more information. The initial roadmap I produced for Bristol can be summarised as follows:

Step 1: Make the initial business case for Service Oriented Architecture (SOA) (even if it is not explicitly called ‘SOA’ at this stage) to achieve senior-level understanding of the importance of a good data architecture and of why investment in this apparently invisible middleware layer will bring benefit to the institution.


Step 2: Complete the data dictionary and the interface catalogue to at least 80% coverage of all master data system integrations.

[Images: data dictionary and interface catalogue]

See my separate blog for why this is important: http://coredataintegration.isys.bris.ac.uk/2013/06/16/important-documentation-for-soa-the-interface-catalogue-and-data-dictionary/. The benefits at this point are:

  • Faster (cheaper) implementation of new IT systems (quicker to assess impact of a system change on other systems)
  • Faster (cheaper), more reliable Business Intelligence/Operational report production

Step 3: Introduce data governance.

[Image: data stewards]

By introducing data stewards, who will be responsible for ensuring that no changes to master data structures ‘go live’ before a check has been made against the interface catalogue and the data dictionary (through a formally defined process), the benefit we introduce at this point is:

  • A more sustainable IT architecture and a higher guarantee that business processes will function without disruption. This is due to data being managed as an institutional asset and ongoing changes to master data structures being managed in a controlled way as they propagate throughout our IT system landscape (avoiding system and process failure and/or damage to our organisation’s reputation)

Step 4: Analyse the interface catalogue and develop a logical data model from the data dictionary. Our interface catalogue already reveals the high percentage of point-to-point interfaces that replicate very similar data synchronisation tasks between IT systems. At this point we can make recommendations about how to reduce this percentage dramatically and refine the integrations into a minimal number of key data services that share master data across the systems which require it. This is where we relate SOA planning to the business architecture of the University, with an appreciation of how data is used across the student and research lifecycles. This step therefore requires business analysis as well as technical systems analysis.
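A minimal sketch of that analysis, using invented catalogue entries rather than real Bristol integrations: group the interfaces by the master data entity they carry, and any entity fed out to several systems point-to-point becomes a candidate for a single shared data service.

```python
# A hypothetical sketch of analysing the interface catalogue for consolidation
# candidates. The entries below are invented examples.
from collections import defaultdict

interfaces = [
    {"source": "Student records", "target": "Library", "entity": "Student"},
    {"source": "Student records", "target": "VLE", "entity": "Student"},
    {"source": "Student records", "target": "Card access", "entity": "Student"},
    {"source": "HR system", "target": "Identity management", "entity": "Staff"},
]

# Group point-to-point feeds by the master data entity they carry.
by_entity = defaultdict(list)
for interface in interfaces:
    by_entity[interface["entity"]].append(interface)

for entity, feeds in by_entity.items():
    if len(feeds) > 1:
        targets = ", ".join(feed["target"] for feed in feeds)
        print(f"Candidate for a shared '{entity}' data service (currently fed to: {targets})")
```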

The benefit of doing this is that we are able to describe more specifically the cost-savings and agility that the University stands to gain if it invests in consistently applied SOA technology. We can note that Gartner’s research demonstrated that 92% of the Total Cost of Ownership (http://www.gartner.com/it-glossary/total-cost-of-ownership-tco/) of an average IT application, based on 15 years of use, is incurred after the implementation project has finished (put another way, an application whose implementation costs £100k would, on that figure, go on to incur over £1.1m of further cost across its lifetime). A significant part of those costs will be concerned with maintaining the application’s seamless integration within the organisation’s application architecture, so by simplifying this we should save costs over time.

Naturally, the other purpose of doing this step is so that we can analyse the “As Is” architecture and design the “To Be” architecture – a precursor for step 5.

Step 5: Adopt a SOA technology solution and train IT developers to use it.

[Image: the ‘blue ring’ diagram]

The analysis from Step 4 should provide us with the basis from which to design the data integration architecture we need. The ‘blue ring’ diagram is not a formal systems architecture diagram by any means; it is simply used to illustrate roughly the concept of a master data systems architecture through which we decouple systems requiring master data from connecting directly to the master data systems themselves (avoiding the proliferation of point-to-point systems integrations that we suffer from currently). We should have learned through Step 4 where we need real-time data, where near real-time data will suffice, and where real-time data is not essential. We can therefore look for the key processes that we need to support via ESB technology and where, in other places, nightly refreshes of data will ‘do’ (a triage sketched after the list below). We can prioritise which master data services we want to implement in which order, using our favoured SOA technology (whether we go the open-source or commercial route). We can decide if we want to keep and improve our operational datastore, and we can determine carefully whether we deliberately want to maintain some point-to-point, database-to-database integrations (i.e. analyse where web services will not in fact offer us greater benefit, acknowledging the trade-off between maintaining existing Oracle-to-Oracle database connections and the possible reduction in performance if we deploy data service intermediaries). We need a dedicated team of trained developers to monitor and operate the data integration architecture according to standards we define (SOA governance), to understand how to optimise data caching within it, and so on. At this point the benefits we expect to achieve are:

  • Cheaper, less complex to maintain, standardized master data integration architecture (blue ring).
  • Higher guarantees of data security & compliance (data services to have Service Level Agreements and data security to be built in from the outset)
  • Agility to respond quickly to Cloud and Shared Services opportunities
  • Quicker to respond to changes in external reporting data structures (KIS, HESA, REF etc.)
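Here is the triage mentioned above as a minimal sketch; the staleness thresholds and example consumers are assumptions for illustration only.

```python
# A hypothetical sketch of deciding, per consumer of master data, whether it
# needs a real-time data service (via the ESB) or whether a nightly refresh
# will 'do'. Thresholds and example consumers are invented.
from dataclasses import dataclass


@dataclass
class Consumer:
    name: str
    entity: str
    max_staleness_hours: float   # how out-of-date the data may acceptably be


def integration_style(consumer: Consumer) -> str:
    if consumer.max_staleness_hours <= 1:
        return "real-time data service (via ESB)"
    if consumer.max_staleness_hours <= 24:
        return "nightly batch refresh"
    return "periodic extract"


for consumer in [
    Consumer("Card access control", "Student", max_staleness_hours=0.5),
    Consumer("Library lending", "Student", max_staleness_hours=24),
    Consumer("Annual statutory returns", "Student", max_staleness_hours=24 * 30),
]:
    print(f"{consumer.name}: {integration_style(consumer)}")
```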

Step 6: Train our IT developers and Business Analysts to work together using a standard set of skills and tools based around BPEL, BPM etc.

At this point we expect to reach the level of SOA sophistication where we have the ability to orchestrate and optimise end-to-end processes that share and manipulate data, creating efficiencies and business agility across the institution.
Suffice it to say, we are some years off Step 6! We are currently working on steps 2–4 above. This Autumn I am running a knowledge exchange workshop for HEIs working on SOA roadmaps, to gather, compare and contrast their plans, successes and lessons learned to date. If you are interested in coming, please contact me directly.