Looking Back on Year 4

This post is part of a fond farewell to the University of Bristol: I have now finished a very enjoyable four years as Enterprise Architect there and have just moved on to the role of Senior Enterprise Architect at the university down the road – UWE! UWE is building quite a significant EA capability from this year, recruiting three EAs in one go, led by an Assistant Director of IT dedicated wholly to Strategy and Enterprise Architecture. It was an opportunity I couldn't turn down, but it was a difficult decision to leave the University of Bristol and I will really miss my fantastic colleagues there. IT Services at Bristol has recently recruited an excellent new CIO, Darrell Sturley, and I think he will help improve all aspects of IT considerably over the next few years. I will watch with interest!

So, back to the real subject matter of this final post on my blog: what did we achieve through Enterprise Architecture in my fourth year as Bristol's EA? The two key achievements of the year were the successful launch of Master Data Governance at the University and the beginnings of a robust Technical Design Authority (TDA) capability within IT Services. I've blogged quite a lot about Master Data Governance, so I will focus here on the major pieces of work that enable governance of our IT architecture through a Technical Design Authority.

Technical Design Authority – how?

To establish a Design Authority capability I focussed less on 'who' should be 'the' authority figure, saying yea or nay to all proposed IT implementations and standards, and more on how such a design authority should behave at the University. Put another way, I believe the tools and processes are as important as the roles and governance structure put in place for the TDA. I designed a process (including a form for people to complete) through which all IT Change Proposals can be submitted – big or small, and whether originating from within IT Services or from elsewhere (for example through the University's business case approval process). But I was particularly concerned that the appraisal process be based not on educated guesses, personal views, gut feelings, or even how well presented a proposal might be, but instead on a formal evaluation of the proposal against three things: 1. clearly specified IT Architecture Principles, 2. an understanding of all Architectural Dependencies, and 3. agreed Target Architectures.
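To make that three-part evaluation concrete, here is a minimal sketch in Python of how a submitted Change Proposal might be appraised against principles, dependencies and target architectures. All field names, principle names and statuses here are invented for illustration – this is not our actual form or process, just the shape of the idea.

```python
# Hypothetical sketch of the three-part TDA appraisal: a proposal is scored
# against named architecture principles, known dependencies, and agreed target
# architectures -- not against gut feeling. All names are illustrative only.

def appraise_proposal(proposal, principles, dependency_map, targets):
    """Return a list of findings for the Design Authority to discuss."""
    findings = []
    # 1. Check the proposal against each clearly specified principle.
    for principle in principles:
        if not principle["check"](proposal):
            findings.append(f"Conflicts with principle: {principle['name']}")
    # 2. Surface every component that depends on the systems being changed.
    for system in proposal["systems_affected"]:
        for dependent in dependency_map.get(system, []):
            findings.append(f"Dependency to review: {system} -> {dependent}")
    # 3. Flag divergence from the agreed target architecture for each area.
    for area in proposal["architecture_areas"]:
        target = targets.get(area)
        if target and proposal["direction"] != target:
            findings.append(f"Diverges from target architecture for {area}")
    return findings
```

The point of mechanising it like this, even on paper, is that the appraisal produces the same findings whoever fills in the form.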

For this to work, documentation is needed. Over the year we created the following resources to use in the TDA process.

1. IT Architecture Principles

I was given a Technical Architecture team to work with in the last year of my role, and this great group of technical experts and I came up with a set of draft principles that we felt were particularly pertinent to the University at this time:

IT Architecture Principles

These top 8 drill down into sub-principles, but we wanted this small number of summary principles so as not to bamboozle people submitting IT proposals! I should say that they were linked to higher-level, overarching principles published by IT Services more generally. I am a big fan of how TOGAF encourages the writing of principles, so we then articulated each principle more precisely. I'll give one example here (a further articulation of the third principle in the list above):

Architecture Principle 3

It is interesting to note that the implications can often result in actions that need to be taken! Implication 3 is particularly relevant to the Design Authority's decision-making process.

2. Architectural Dependencies

It is all too easy for someone in, say, the Server Infrastructure team or the Databases team to come up with an architectural improvement without appreciating the knock-on effect on the applications architecture – or vice versa. For example, upgrading to the latest release of Oracle or SQL Server would seem to be a good idea, but clearly not if there are business applications which, for whatever reason, aren't yet compatible with it and might stop functioning correctly. Similarly, one area of the University might think it a great idea to purchase a new web portal to surface content from their system on the web without realising they are duplicating another web view of information offered via an existing system – ultimately creating an ill-planned web experience for visitors to our website and potentially confusion all round. I have encountered many, many examples where the lack of an holistic understanding of our IT architecture has allowed bad decisions to be made.

To counter this we drew up ArchiMate diagrams of our systems architecture. The Technical Architecture team and I received excellent training from Bizzdesign back in January to ensure we had several experts within IT Services able to maintain this documentation centrally going forward. We mapped out our entire infrastructure (virtualised and non-virtualised server environments, storage, network, backup services, databases and so on), then mapped our critical services onto it. Initially we used the free tool Archi to analyse our architectural dependencies; here's an example (click to enlarge):

Explaining ArchiMate Tool Advantage

Then, after explaining to the Senior Management Team how valuable it is to be able to understand which architectural components are interrelated using a fuller-featured product, we were given the funding to purchase Bizzdesign Architect licenses. I explained that with good documentation we are also able to understand more about dependencies within our architecture generally. For example, here I can show that our student administration system depends on a lot of infrastructure components: the diagram shows very simply that if the Linux virtualisation platform supporting the web farm should fail, then the web client to the system (SITS:eVision) would fail too. However, the desktop client (SITS:Vision) would be unaffected, so certain users would still be able to access the system that way.

SITS ArchiMate dependency diagram

From a design authority point of view, it is valuable to use these architecture documents to ask pertinent questions about the undesirable knock-on effects that might occur if a particular IT change is approved.
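As a hedged illustration of the kind of impact query these models enable, here is a small Python sketch of the SITS example above. The dependency edges are simplified from the diagram, and the traversal simply finds everything that transitively depends on a failed component:

```python
# A minimal sketch of the impact analysis the ArchiMate models support: given
# "depends on" edges, find everything that (transitively) depends on a failed
# component. Component names follow the simplified SITS example in the text.
from collections import deque

DEPENDS_ON = {
    "SITS:eVision": ["web farm"],
    "SITS:Vision": ["SITS database"],
    "web farm": ["linux virtualisation platform"],
    "SITS database": ["storage array"],
}

def impacted_by(failed_component):
    """Return every component that transitively depends on the failed one."""
    # Invert the edges: component -> things that depend directly on it.
    dependents = {}
    for comp, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(comp)
    impacted, queue = set(), deque([failed_component])
    while queue:
        current = queue.popleft()
        for comp in dependents.get(current, []):
            if comp not in impacted:
                impacted.add(comp)
                queue.append(comp)
    return impacted
```

With these edges, a failure of the virtualisation platform takes out the web farm and SITS:eVision, while SITS:Vision (which reaches the database directly) is untouched – exactly the conversation the diagram lets us have with support staff.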

3. Target Architectures

We could draw out target architectures using ArchiMate, and these are useful. But I wanted something more accessible for those not familiar with that language. I follow the work of ITANA and decided to go with the Architecture Bricks approach they were evaluating at the time.

I adapted a template that the NIH in the US were using at the time:

Brick Template

and the Technical Architecture team and I produced bricks for 20 major architectural areas we support. Here’s an example:

Storage brick

I ran a session with the IT Senior Management Team where we reviewed the 20 bricks together. It revealed key issues, such as the number of retirement targets we had not yet planned for, and the number of – thus far uncoordinated – plans to move to Cloud services.

From a Design Authority point of view, once they have been evaluated holistically and approved centrally, the Bricks become 'the bible'. They describe future plans and containment/retirement targets. Therefore, for example, if a proposal is offered that attempts to build a new service on something we've marked as a retirement target, we can have a fully informed conversation about whether to recommend against it.

In this respect, the Design Authority isn't there to stop all IT plans that run counter to existing plans. Instead it is about making a clear and informed response about the implications for the organisation of deviating from a well-thought-out IT architecture plan.
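A toy sketch of how Brick dispositions might feed that conversation (the statuses and technology names below are invented for illustration – they are not Bristol's actual bricks):

```python
# Hedged sketch: each approved Brick records a disposition for the technologies
# in its architectural area, and the Design Authority checks incoming proposals
# against those dispositions. Statuses and names here are illustrative only.
BRICKS = {
    "Solaris on M5000": "retirement target",
    "Linux virtualisation": "baseline",
    "Cloud storage": "emerging target",
}

def review_against_bricks(proposed_technologies):
    """Flag any proposed technology that builds on a retirement target."""
    concerns = []
    for tech in proposed_technologies:
        if BRICKS.get(tech) == "retirement target":
            concerns.append(
                f"'{tech}' is a retirement target - discuss implications before approval"
            )
    return concerns
```

The output is deliberately a list of discussion points, not a veto: the Bricks inform the conversation rather than end it.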

So those are my top 3 contributions to the establishment of a Design Authority capability at the University. There are several other governance aspects of course, but as I said earlier, I felt it was key first and foremost to reduce ambiguous decision-making by providing a clear decision framework within which to operate.

Thank you for taking the time to read my blog and to all the colleagues from Bristol and around the sector that I’ve had the pleasure to work with in my role… I hope to work with many of you still as I stay in the HE sector in my new role! Bye for now!






Enterprise Architecture Approach in an HE Institution: 10 Practical Steps

What’s particular about doing EA in a research-intensive HE Institution like the University of Bristol?

For one thing, the HE sector has some interesting dichotomies to grapple with, such as the dual activities of a university like Bristol: research and education. In a sense we are two businesses – the business of conducting research and the business of educating students. However, these two activities overlap (the researchers are often also the educators, and some of the students we educate may well go on to become researchers). This has implications for our identity management and business intelligence strategies.

Another dichotomy is at the sector level, where universities both compete and collaborate with each other at the same time. This means that we want to share considerable information about our activities (e.g. we partner with other universities in bids for research grants and then share research data with one another), but we may keep other information private, such as our research strategy, our IT strategy, or our student recruitment strategy. Because of the nature of collaboration in our sector (plus the need to provide regular reporting to the government), some of the onus in IT terms is on data standards for interoperability and on secure, online collaboration tools to support researchers who may be geographically separated but who need to conduct research collaboratively.

There are many other complex aspects to doing EA in a university setting, of course, and with tightening budgets and a change in the funding structure in recent years (students now pay large sums for their university education) there is great pressure to rationalise IT architectures and IT support within an institution, whilst at the same time providing enough flexibility to cope with changing demands for information from the government (e.g. the REF, the KIS) and to future-proof ourselves for next-generation research and education.

The University of Bristol is more than 100 years old and has developed over that time with devolved budgets and much autonomy at faculty or department level. Support services at Bristol were reviewed a few years ago and a mission to centralise support, including IT, was launched. From an IT perspective it has felt rather like undergoing a merger: an attempt to bring several disparate organisations, all conducting their research and teaching in quite different ways, into a single, harmonised organisation. So where does one start with EA in this situation?

  1. Map the existing ("As Is") University Systems Applications Architecture to the Business Architecture: Get the 1000-feet view – with EA we are going to think global even if we act local. Lifecycle diagrams (I've blogged about these elsewhere) are useful (an example here) in getting an initial overview of the systems architecture.
  2. Assess how well the Applications Systems Architecture suits the University's Requirements: Discover the University's core IT systems that underpin its core processes. Do we have resilience in IT terms for supporting those core processes? (Note that universities run to an academic calendar, so some IT systems are critical at certain times of the year – for example when making offers of places to students or timetabling their exams – or in some cases every few years, for example in the critical period leading up to a REF submission.) Do we understand the University's required operating model (see Operating Models adapted from "Enterprise Architecture As Strategy"), and do the IT systems we have support it? For example, is it part of the University's strategy to offer highly customised or even personalised services for students, or to offer standardised services across the board? If we analyse different areas to interpret the University's intended operating model for each, this should show where cost savings could be made – unifying IT solutions is often cheaper – however it is essential to also understand where a diversity of technical solutions must be preserved for strategic reasons. For example, whilst it might make sense to have a unified solution for student administration, there could be good reason to allow several different student assessment tools – because assessment might need to be carried out differently for Drama students than for Chemistry students, say. In assessing how fit for purpose the applications architecture is, there is also the significant area of information architecture to review – something we have only partially taken on so far at Bristol.
  3. Assess the Health of the University’s Data Architecture: if data is the life blood of the organisation then how do you measure how healthy your data architecture is? We have developed an Interface Catalogue to help understand our current data integration architecture more transparently and therefore how to improve it. In tandem we are developing an online Enterprise Data Dictionary in an attempt to centralise and maintain agreed data definitions across the organisation. We are working towards a data-centric SOA architecture in the longer term.
  4. Document the Infrastructure Architecture, encapsulated as a set of services on which the Applications Architecture depends – network, storage and servers. This can be shown as a layer of abstraction, without worrying initially about the live, rapidly changing environment where, for example, we have deployed virtualised server solutions. I have recently written about why we are doing this at Bristol – to support the analysis and design of future improved architectures. This is currently the work of our Technical Architecture Virtual Team.
  5. Agree the level of IT innovation the University wants: This is a strategic discussion to be had with senior management. Gartner Hype Cycles can be a useful way to initiate dialogue in this area, as Gartner releases hype cycles not only specific to the HE sector but also in relation to particular technologies. A good question is: what does the technology hype cycle look like for our University? For example, will we be early or late adopters of 3D printing, what is our strategy for MOOCs, and how advanced is our Big Data strategy? Deciding where we are on new technologies is a key aspect of planning – will we need to be supporting 3D printers in every office around the University in five years' time, for example? And in which case, what is the plan for that and what will it cost? The appetite for embracing newer technologies might vary according to existing skillsets and levels of resource in the IT Services department. It will also be affected by whether senior management view IT as simply a functional thing or are interested in what cutting-edge technology can do for the University – how it could enable differentiation in an increasingly competitive sector. Our Technical Architecture Virtual Team is currently thinking about IT "horizon scanning" and documenting our thoughts using a "bricks" approach. We plan to develop the target architecture plan by examining our 2 year/5 year plans for each architectural brick, prioritising and costing them so that we can determine a sensible roadmap for change using an holistic view of the IT architecture. We will then propose the roadmap to senior management, and aim to improve it continuously over time. Within a research-intensive institution like Bristol, developing innovative IT solutions can also be part of research proposals.
Similarly, the amount of time that IT Services spends collaborating with the University's research community like this should be agreed in principle with senior management at the institution.
  6. Develop the blueprint for SOA maturity and discover opportunities to partner up: Developing our University's SOA (Service Oriented Architecture) blueprint is something we will be doing this year with the help of a product-agnostic consultant. Inputs to this process come from 1, 2 and 3 above. We will be keen to collaborate with other universities at a similar level of SOA maturity and to look for opportunities for shared services going forward, with the goal of creating efficiencies. A well-aligned business architecture and a flexible data architecture across institutions will be needed to cope with this sort of endeavour, and this is no small "ask". The UCISA EA group has plans to develop our opportunities for SOA at the sector level – the spirit of collaboration in the HE sector reaches far, even into the realm of IT support services! There are many of us undertaking Enterprise Architecture initiatives in our institutions and we are keen to continue to meet, share and learn.
  7. Develop the target ("To Be") Technical Architecture for the University: The abstraction of required infrastructure services in 4, plus the SOA blueprint in 6, should help indicate the required To Be architecture when appraised within the appetite-for-innovation context of 5. For example, we have several server virtualisation solutions at Bristol which we could rationalise, we have database vendor diversity which we would like to reduce without damaging corporate services, and we have diverse CMS solutions for which we should be initiating end-of-life plans for the older, legacy ones. We also want to explore continued opportunities for Cloud solutions at the different architectural layers. We have moved email and calendaring into the cloud without too much difficulty, as this was a fairly discrete part of the technical architecture to deal with. However, other services currently have more complex interactions with each other and will require increased SOA maturity so that we become agile enough to move services into the Cloud as and when we see fit. I consider the "To Be" architecture to be the reference architecture, with particular solution architectures developed to fulfil it.
  8. Encourage senior management to engage fully with the concept of Total Cost of Ownership (TCO): Senior management need to understand fully what this is and to be cognisant of the fact that when they approve a third-party system, the initial purchase is probably only the tip of the iceberg compared with the costs of supporting the product, upgrading it, and most likely purchasing extension modules for it over time. For example, we are currently developing our roadmap for the launch of a second data centre, and questions such as whether our products can support active-active are coming to the fore. If they cannot, then we may have to build solutions to compensate, and this could add a cost to the data centre solution – a cost that we might have predicted when we first purchased the products in question. Other questions we might ask of a product are whether it is backed by only one database option (and does IT Services currently support that?), and whether it comes with a suitable web services API or will we have to augment it at our own cost. There are many such questions, and it is worth treating "non-standard" products as incurring a support debt – a higher TCO – and making this transparent to senior management up front. Otherwise IT Services could find itself struggling to cope with support demands for which no budget was specifically allocated, whilst at the same time the IT architecture becomes harder to mature.
  9. Develop the IT Architecture Roadmap using all of the above: IT architecture should be a key plank of the IT strategy, and the strategy should indicate the vision of the IT plan – the way to get from A (our current IT architecture state) to B (the future, desired state) being via a roadmap. Articulating principles (e.g. describing whether IT Services is mainly committed to building in-house solutions versus procuring third-party solutions) is a key aspect of articulating the overall vision. It could be that we're looking to create benefits such as cost reduction, in which case the vision will help describe where, say, moving to cloud services will reduce spending. The roadmap will likely indicate timings, dependencies and key milestones. ArchiMate can be handy for presenting, visually, future architecture states along the roadmap. The one- or two-year projections may be clearer and more specific; longer-term projections may be more generalised and allow for flexibility, as the University's strategy – as well as technology – will likely change in that time.
  10. Ensure the IT Architecture Roadmap is communicated, and service development and delivery managed, in a sufficiently devolved way: We are using ITIL as a framework in this respect at Bristol. Clearly there's no point creating a roadmap if it isn't fully communicated and all areas of IT Services committed to supporting it. This may mean a fresh commitment to documentation and formal processes regarding compliance with agreed IT standards and target architectural designs. Other stakeholders around the University will also need to understand the roadmap and its implications where relevant to them. The overall plan needs to be understood by senior decision-making bodies (at our University we have the Systems and Process Investment Board) such that they endorse it and accept that diverging from the blueprint could incur 'IT debt' as discussed above.
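The TCO point in step 8 is really just honest arithmetic made visible. A toy calculation (all figures invented) shows how a purchase price that looks cheap becomes something else once support, upgrades, extension modules and any "non-standard" support debt are counted over the product's lifetime:

```python
# Illustrative TCO arithmetic for step 8: the purchase price is only the tip of
# the iceberg once ongoing costs are added over the product's lifetime.
# Every figure below is invented for the sake of the example.
def total_cost_of_ownership(purchase, annual_support, upgrades, modules,
                            support_debt_per_year=0, years=5):
    return (purchase
            + annual_support * years
            + sum(upgrades)
            + sum(modules)
            + support_debt_per_year * years)

# A hypothetical product: a 100k purchase looks cheap until the rest is counted.
tco = total_cost_of_ownership(
    purchase=100_000,
    annual_support=20_000,          # vendor support contract
    upgrades=[15_000, 15_000],      # two major upgrades over five years
    modules=[30_000],               # an extension module bought later
    support_debt_per_year=10_000,   # non-standard database, no web services API
)
```

Presenting the five-year figure rather than the purchase price is the transparency step 8 asks senior management to insist on.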

Aligning the Infrastructure Architecture with the Systems Application Architecture

Earlier this year we launched a new team within IT Services: the Technical Architecture Virtual Team. I lead this team, reporting via the Assistant Director of IT Services for Services Development (the team’s Sponsor) to the Senior Management Team. The team is made up of a small number of technical experts within the department, representing key aspects of the University’s IT Architecture: Networks, Storage, Resilience, Applications, Middleware, Data Centre and Infrastructure.

The key problem the team seeks to tackle is aligning the infrastructure services with the University’s application (or “business”) services. We need greater alignment than we currently have so that we can:

  • be confident that any proposed change in the infrastructure architecture (e.g. moving from an M5000 Sun Server storage solution to a Nimble storage array) will suit the requirements of the application architecture (and that we can manage the migration carefully and with predictable outcomes),
  • be confident that we can plan for any design change in the application architecture (e.g. replacement of a set of finance and HR systems with an ERP system) to be supported by the most appropriate infrastructure solution, with best use of infrastructure resources and according to agreed principles around resilience and so on,
  • troubleshoot effectively when systems failure occurs (e.g. discover an infrastructure layer failure that causes failure of a business service, by having accurate, realtime data on, for example, precisely which individual servers in a private cloud were supporting the business service at the time),
  • maintain systems more effectively (e.g. be able to take down particular database instances for maintenance, in a modular, scheduled and managed way, with minimal impact on service),
  • optimise the IT architecture holistically on an ongoing basis – i.e. create new architectural designs (taking advantage of cloud storage solutions or identifying opportunities for shared services at the sector level for example) based on a clear definition of the requirements that the IT applications architecture has on infrastructure.

We can view the relationship between these layers conceptually as in this diagram (click to enlarge):

It is important to articulate the set of conceptual Applications (or "business") services we need to support, as well as the set of physical applications we have deployed in order to realise those conceptual services. Similarly, it is important to articulate the set of conceptual Infrastructure services required to support the applications layer, as well as their physical implementation. This is so we can assess the design of the entire IT architecture according to an holistic, requirements-driven approach rather than being driven purely by technology favouritism, for example. The team is currently documenting our architectures (using the ArchiMate modelling standard) according to agreed conventions that work best for us.

We will need to maintain this documentation on an ongoing basis for two key purposes: i) Troubleshooting: we are currently investigating how we can export real-time configuration information from our virtualised infrastructure layers into this common visual format (and how to sync it with our helpdesk application – we use a product called Topdesk – to assist IT first- and second-line support); ii) Design: by combining various ArchiMate models of our architecture into more holistic architectural overviews, we aim to focus on known weaknesses in our architecture and plan target architectures and the roadmap for change in an informed, aligned way.
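A sketch of purpose (i): joining a live export from the virtualised infrastructure (which hosts are up, and what runs on them) with the modelled service dependencies, so first-line support can see at a glance which business services a host failure touches. The data shapes and field names below are assumptions for illustration, not our actual export format:

```python
# Hypothetical sketch: combine a live infrastructure export with the modelled
# component->service mapping so support staff can see which business services
# a downed host affects. All field names and records are illustrative only.

LIVE_EXPORT = [  # e.g. parsed from a virtualisation platform's API
    {"host": "vmhost-01", "runs": ["web farm node 1"], "status": "up"},
    {"host": "vmhost-02", "runs": ["web farm node 2"], "status": "down"},
]
SERVICE_MODEL = {  # from the ArchiMate model: component -> business services
    "web farm node 1": ["SITS:eVision"],
    "web farm node 2": ["SITS:eVision", "staff intranet"],
}

def services_at_risk():
    """Business services supported by any component on a downed host."""
    at_risk = set()
    for host in LIVE_EXPORT:
        if host["status"] == "down":
            for component in host["runs"]:
                at_risk.update(SERVICE_MODEL.get(component, []))
    return at_risk
```

The value is in the join: the live export alone says "vmhost-02 is down", while the combined view says which services users will ring the helpdesk about.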

At the same time we are using a Bricks approach to evaluate individual Architecture Building Blocks in detail (see for example https://enterprisearchitecture.nih.gov/Pages/WhatIsBrick.aspx) in order to evaluate our 2 year/5 year strategy for different areas of the architecture. The ArchiMate work then helps us to evaluate options for each Brick in relation to other Bricks, rather than in isolation. Through this we aim to avoid the somewhat siloed ways of working that we have found can occur almost inevitably when we have separate Applications and Infrastructure teams.

To put this piece of Enterprise Architecture work in context, see the diagram below.

Layers in the architecture stack

This diagram illustrates the positioning of the Technical Architecture Virtual Team's work in relation to other Enterprise Architecture activities that I've written about elsewhere in this blog. It also shows that all our work is done within the context of the organisation's strategy and an understanding of the business architecture that IT Services needs to support at the University.

Spaghetti Grows in System Architectures – not an April Fools’ Day joke

A replay on breakfast TV this morning of the well-known Panorama spaghetti-harvest hoax (1st April 1957) reminded me of the mission we're on at Bristol to "turn spaghetti into lasagne". This mission is number 7 on the JISC's 10-point list for improving organisational efficiency: spaghetti refers to the proliferation of point-to-point (tightly coupled) integrations between our University's many IT systems, and lasagne refers to the nicely layered systems and data architecture we'd like to achieve (see elsewhere in this blog).

However, transforming our data architecture overnight is not achievable; instead we've developed a roadmap spanning several years in which reform of our data architecture fits into the wider contexts of both Master Data Management and Service Oriented Architecture.

In November last year our senior project approval group (now known as the Systems and Process Investment Board) agreed to resource a one-year Master Data Integration Project. We will return to the same board early in 2015 with a follow-on business case, but this year's project is concerned with delivering the following foundation work:

  • The establishment of Master Data governance and process at the University (the creation of a Master Data Governance Board and the appointment of Data Managers and Data Stewards as part of existing roles throughout the University – responsible for data quality in their domains and for following newly defined Change of Data processes),
  • Completion of documentation of all the spaghetti (aka the integrations between our IT systems) in our Interface Catalogue, and also the documentation of our Master Data Entities (and their attributes and relationships) in an online Enterprise Data Dictionary (developed in-house),
  • Development of a SOA blueprint for the University, including our target data architecture. This will be done with the help of a SOA consultant and will inform the follow-on business case for SOA at Bristol, which we hope the University will fund from 2015.
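To give a flavour of the second item, here is a hedged sketch of what one Interface Catalogue entry might record for each strand of "spaghetti". The field names and example records are illustrative inventions, not our actual catalogue schema:

```python
# Hypothetical sketch of Interface Catalogue entries: each point-to-point
# integration records its source, target, the master data entities it carries,
# its transport and its coupling style. All fields and records are invented.
INTERFACE_CATALOGUE = [
    {
        "source": "Student records system",
        "target": "Library system",
        "entities": ["Student", "Course"],
        "transport": "nightly CSV over SFTP",
        "coupling": "point-to-point",
    },
    {
        "source": "HR system",
        "target": "Identity management",
        "entities": ["Staff"],
        "transport": "database link",
        "coupling": "point-to-point",
    },
]

def interfaces_carrying(entity):
    """Find every integration that moves a given master data entity."""
    return [i for i in INTERFACE_CATALOGUE if entity in i["entities"]]
```

Being able to ask "which interfaces carry the Student entity?" is precisely what makes the catalogue useful when planning which strands of spaghetti to replace with master data services first.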

We are undertaking this work with the following resources: Enterprise Architect (me) at 0.3FTE for a year, a Business Analyst (trained in Enterprise and Solutions Architecture) at 0.5FTE, a Project Manager at 0.3FTE, IT development time (both for developing the Enterprise Data Dictionary and for helping to populate the Interface Catalogue with information) and approximately £60K of consultancy.

We had some very useful consultancy earlier this year from Intelligent Business Strategies: several insightful conversations with MD Mike Ferguson, and a week with Henrik Soerensen. From this we were able to draw up a Master Data Governance structure tailored to our organisation, which we are now trialling.

Master Data Governance Structure v3
This work also helped us to consider key issues around governance processes and how to capture key information – such as including business rules around data – in the online data dictionary.
Later this year we will be working for an extended period with an independent SOA consultant based in the South West, Ben Wilcock of SOA Growers. We have already worked with Ben in small amounts this year and I am very much looking forward to collaborating with him further to develop our target data architecture (most likely a set of master data services supporting basic CRUD operations) within the context of a SOA blueprint for our enterprise architecture.
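A minimal sketch of what such a master data service might look like: basic CRUD operations over one agreed entity, so that consuming systems go through the service rather than integrating point-to-point. The in-memory store and entity shape are illustrative only; the real thing would sit behind web services and the governance processes described above:

```python
# Hedged sketch of a master data service: CRUD over a single master data
# entity (e.g. Student), keyed by identifier. The in-memory dict stands in
# for whatever persistence and service layer the real architecture would use.
class MasterDataService:
    """Create/read/update/delete operations over one master data entity."""

    def __init__(self):
        self._records = {}

    def create(self, record_id, record):
        if record_id in self._records:
            raise ValueError(f"{record_id} already exists")
        self._records[record_id] = dict(record)

    def read(self, record_id):
        return self._records.get(record_id)

    def update(self, record_id, changes):
        if record_id not in self._records:
            raise KeyError(record_id)
        self._records[record_id].update(changes)

    def delete(self, record_id):
        self._records.pop(record_id, None)
```

The design point is that a change to master data happens in exactly one place, through a governed operation, rather than rippling uncontrolled through point-to-point interfaces.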

SOA Knowledge Exchange 2013 – a beacon of light or the blind leading the blind?

Some time ago, at a UCISA Enterprise Architecture event, I discussed whether we needed a workshop for Higher Education (HE) sector technical architects to get together and share knowledge of Service Oriented Architecture (SOA). Whilst enthusiastic, colleagues expressed concern at the time that the sector was possibly too immature in this area for any individuals within it to be able to contribute to a worthwhile discussion. However, I was not keen to host a vendor-led event, as these can sometimes introduce an unhelpful level of product bias and not necessarily highlight the pitfalls to avoid when pursuing a SOA roadmap in an HE institution.
Photo of SOA knowledge exchange

Since then I have bumped into several colleagues from other HE institutions at various events and begun to realise that there was a lot more SOA activity going on around the sector than I'd thought. The people I spoke to were very encouraging about the possibility of a SOA knowledge exchange and some obvious speakers began to emerge. So I organised the SOA Knowledge Exchange at Bristol. I only had 40 places and they were filled within five days of advertising by UCISA and the JISC, clearly demonstrating that this is a hot topic for many of us. The workshop took place last month and here is a summary report of what went on.

I presented the first session, briefly describing our SOA roadmap at Bristol (much of which is detailed elsewhere in this blog) and explaining how I managed to reach the ears of our senior decision-making body last year. I talked about how, as Bristol's Enterprise Architect, I am able to take an holistic, enterprise-wide approach and to articulate, to non-technical key strategy-makers, the problems that our complicated, point-to-point architecture causes us. Taking a SOA approach should reduce the costs of maintaining our data architecture, reduce data redundancy and duplication, and save us the embarrassment of system breakages caused by changes to data in a master system rippling in an uncontrolled way through the systems architecture. However, implementing a SOA solution will require investment, and at Bristol we see the road to SOA maturity taking years, not months. We are beginning with a year-long foundation stage. I talked about this and the work we are doing to document our current integration architecture: the Interface Catalogue and the Enterprise Data Dictionary (I also discuss these elsewhere in this blog).

Next up was Simon Bleasdale from Cardiff. He talked about how Cardiff is taking an iterative approach to enterprise application integration. Cardiff are building in-house skills through the creation of data integration and application integration teams, and their approach is project-focussed. Simon described their range of data services developed so far using RESTful web services via a combination of Grails and Mule. Having previously invested in IBM software, Cardiff are now conducting a review of this and plan to explore whether Talend or WSO2 would be a good future option for their University. They are committed to taking an agile, open source, simple and cost-effective approach to data integration.

Allister Homes from Lincoln followed Simon to describe their current technical architecture in terms of their early work on master data management (MDM) and deploying Enterprise Service Bus (ESB) technology using Microsoft BizTalk. He discussed how a combination of MDM and ESB is enabling Lincoln to distribute data via their SharePoint and Customer Relationship Management systems to a range of application services across the institution. He also read out a highly entertaining and very apt paragraph that Lincoln put in their SOA business case for senior management, likening the current experience of their digital communities to that of dwelling in a primitive, unintegrated town settlement and describing the vision of the future as analogous to a thriving 'metropolis' with properly integrated services.

Martin Figg and Russell Gibson from Imperial gave a tour of the issues currently faced at their institution and progress to date. Imperial have implemented SOA solutions for sharing HR organisation hierarchy data and research data with other IT systems, and are now working on building data security into the architecture by integrating with their Authentication As A Service (AAAS) solution. They have built SOA infrastructure in a standardised way to support generic error handling, reporting and resubmission. They are continuing to build strong SOA governance to stop the proliferation of point-to-point interfaces between systems and to promote the standardised reuse of data services to replace them. They have at least six skilled staff working in this area, with others in various supporting roles, including a SOA consultant to help answer particular queries. Some really interesting questions were raised in this presentation about the implementation of Enterprise Business Objects (EBOs) and Enterprise Business Services (EBSs), and the hard design decisions they've been facing in configuring the generic Oracle Business Suite to work best for their HE-specific purposes.

Cal Racey brought us up to lunchtime with a presentation on Newcastle’s SOA and Institutional Data Management work to date. He described Newcastle’s institutional data feed service (IDFS) which delivers data to over 30 of their systems via 70 different feeds. This came out of the JISC funded IDMAPs project and is currently supported by the equivalent of two full-time staff. They are using Talend. Cal advocates a light-touch data dictionary approach, avoiding going down rat holes with data structuring and modelling and also warns against the distractions of endless battles around whether to choose a SOAP versus REST approach, XML versus JSON data formats and so on. He also recommends continuously identifying where master data is held and what new master data an application produces. Cal described the importance of measuring the data problems right at the start so that it is possible to evidence the success of SOA-based solutions and thus build the case for continued SOA investment.

After lunch Jim Phelps joined us virtually from the States, where it was 8.30am his time! He is Enterprise Architect at the University of Wisconsin-Madison and also chair of ITANA. His presentation first described the push notification architecture they have introduced at Madison and then went on to cover ITANA's work. A recent survey yielded some interesting stats on technologies being deployed by US universities: the majority are using WSO2's open source stack, probably because of the fairly large developer skillset in this area over in the States. The survey picked up just one Oracle SOA Suite institution, one IBM institution and some FuseSource use. For more information on Jim's excellent work, see ITANA's SOA API Wiki: http://goo.gl/evbpk5 and the W-Madison AIA Wiki: http://goo.gl/xblJS4

Open Group Potential Relationship to UCISA and ITANA Slide
David Rose opened our final speaker session, joined remotely by Chris Harding, Director for Interoperability at the Open Group. The pair gave a very useful, wider context to the whole day, talking about completed and current projects undertaken by the SOA Work Group. Current projects include SOA for Business Technology, SOA Reference Architecture V2 and SOA Certification; resources from completed projects can be found online (http://www.opengroup.org/). They also talked about Cloud readiness and the relevance of SOA to it. They went on to discuss whether the HE sector could create a Reference Architecture that would evolve over time and provide both a language for conversations (and leveraging power) with suppliers and a means of helping individual universities to develop their SOA architectures. They finished with a thoughtful slide on the potential relationship between UCISA (with its Enterprise Architecture group), ITANA and the Open Group (shown above).


And the conclusion? This event didn't feel like the blind leading the blind by any means, and the feedback has been excellent (a selection of which I've recorded here). If we can keep up the momentum, I think something very constructive could come from this initial effort to collaborate. Hopefully we can work on a reference architecture for our sector, share emerging SOA design patterns and perhaps begin to influence key vendors by clearly specifying our common data architecture requirements… Many thanks again to all the speakers and for the great input from those who joined us as participants. It was a very enjoyable day!



SOA Knowledge Exchange, one-day event, Bristol, 27th September 2013

About the event

******** SORRY, THIS EVENT IS NOW FULL  **********

We are holding a workshop in Bristol this month for Enterprise Architects, Technical Architects and those in similar roles at universities in the UK. The general aim of the event is to gather and discuss the SOA roadmap strategies we are developing within our institutions, to share success stories and to alert others to potential pitfalls along the SOA adoption route. This is not an introduction to SOA, but a chance to share real experiences of adopting a SOA approach within an HE institution. There will be presentations and discussions throughout the day, mostly from colleagues working on SOA initiatives in a UK University setting, but also with contributions from Jim Phelps of the ITANA group (serving HE institutions in the USA) and Chris Harding of The Open Group (dedicated to developing global IT standards).

If you would like to attend, please complete the registration form at: https://www.survey.bris.ac.uk/it-services/soa_workshop_registration


The Bergerac Room, 8-10 Berkeley Square, Bristol


Programme (subject to change)

10.00 Refreshments

10.30 Welcome and introduction to the day

10.40 Talk 1: The business case for SOA at the University of Bristol: developing the roadmap and getting buy-in from senior management. Nikki Rogers, Enterprise Architect, University of Bristol

10.55 Q&A

11.10 Talk 2: SOA strategy at Cardiff University: taking an open source, iterative approach. Simon Bleasdale, Senior Integration Developer, and Myles Randall, Integration Developer, Cardiff University

11.25 Q&A

11.40 Talk 3: The early stages of implementing an ESB and deploying a Master Data Management strategy at Lincoln. Allister Homes, Senior Systems Architect, University of Lincoln

11.55 Q&A

12.10 Talk 4: Institutional Data Management at Newcastle: developing and governing data services. Cal Racey, Systems Architecture Manager, Newcastle University

12.25 Q&A

12.40 Lunch and opportunity for breakout discussions

1.30 Talk 5: Building Student Services at Imperial – making SOA a reality. Martin Figg, Software Development Manager, and Russell Gibson, Technical Project Services Manager, Imperial College London

1.45 Q&A

2.00 Talk 6: Chris Harding and David Rose from the Open Group: emerging open standards in SOA and the new 'Open Platform 3.0' area

2.20 Q&A

2.30 Talk 7: Update from Jim Phelps, joining us remotely from the US, on the ITANA group's SOA API initiative

2.45 Q&A

3.00 Coffee and tea; open discussion: are there steps we can take at HE sector level, for example, to develop common data standards and encourage IT suppliers to comply with the SOA approaches we will require? (Noting the Higher Education Data & Information Improvement Programme (HEDIIP), which is developing a Higher Education Data Language, and its potential relationship to the growing number of SOA initiatives in UK universities.)

4.00 Close

Looking back on Year 2

As a follow on to my blog post that reflected on year 1 of EA at Bristol (https://enterprisearchitect.blogs.bristol.ac.uk/2012/04/17/looking-back-on-the-first-year-of-my-ea-role-at-bristol/), here’s a summary of the top three key things I covered in year 2:


  • Embedding EA within the Governance Structure at Bristol. One problem I wanted to tackle was how to ensure that EA is a formal consideration whenever new IT-related projects are proposed at Bristol, whether they involve in-house IT development or the procurement of third-party systems. Until this second year, EA was acknowledged as valuable at Bristol, but there was no place in the project approval process at which the Enterprise Architect could make a clear recommendation based on an appraisal of a new project's fit with Bristol's business architecture, information architecture and technical architecture. When I started in my role I was invited to join the System Programme Managers Group (SPMG), which met monthly to review current and proposed projects and was made up of managers leading various University programmes (IT, Education, Planning etc.). I ran a workshop with this group entitled "Capacity for Change", in which we discussed how to review potential projects in terms of what priority they should have in the already packed project portfolio the University was funding. This, in combination with some senior-level initiatives, culminated in the renaming of the SPMG to the "Portfolio Management Group" (PMG). Its remit is now to support the higher-level Portfolio Executive group more directly by making recommendations on any new project's fit with the strategic priorities of the University. We have aligned the strategic priorities with the benefits mapping work I mention above, and each new business case is now submitted with a PMG recommendation, describing whether we have existing resources to support the proposed project, its priority level, and a statement of its fit with the enterprise architecture – hurray!
So, this is not to say that Enterprise Architecture considerations now dictate decision-making, just that EA has become formally recognised as part of the project approval governance process; a positive step forward.


  • Developing the SOA roadmap for Bristol. After the workshop for the Portfolio Executive mentioned above, I submitted a Stage 0 business case (entitled Master Data Integration Framework), which was approved. The next step is to develop the Stage 1 business case for presentation to the Portfolio Executive this Autumn, and for this I am developing the roadmap, with an indication of costs and benefits along the way, in consultation with others. For this work I have maintained a completely separate blog – see http://coredataintegration.isys.bris.ac.uk/category/final-project-report/ for more information. The initial roadmap I produced for Bristol can be summarised as follows:

Step 1. Make the initial business case for Service Oriented Architecture (SOA) (even if it is not explicitly called 'SOA' at this stage) to achieve senior-level understanding of the importance of a good data architecture, and of why investment in this apparently invisible middleware layer will bring benefit to the institution.


Step 2. Complete the data dictionary and the interface catalogue to at least 80% for all master data system integrations.


See my separate blog for why this is important: http://coredataintegration.isys.bris.ac.uk/2013/06/16/important-documentation-for-soa-the-interface-catalogue-and-data-dictionary/. The benefits at this point are:

  • Faster (cheaper) implementation of new IT systems (quicker to assess impact of a system change on other systems)
  • Faster (cheaper), more reliable Business Intelligence/Operational report production
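The catalogue and dictionary records behind these benefits can be held in something as simple as a structured list. Here is a minimal Python sketch of the idea; the field names and systems are my own illustrative assumptions, not our actual catalogue schema:

```python
# Illustrative sketch only: field names and system names are invented,
# not the University of Bristol's real interface catalogue schema.
from dataclasses import dataclass

@dataclass
class InterfaceRecord:
    source_system: str   # master system the data comes from
    target_system: str   # system consuming the data
    data_objects: list   # e.g. ["student", "programme"]
    mechanism: str       # e.g. "ETL", "web service", "script"
    frequency: str       # e.g. "nightly", "real-time"

catalogue = [
    InterfaceRecord("StudentRecords", "VLE", ["student"], "ETL", "nightly"),
    InterfaceRecord("StudentRecords", "LibrarySystem", ["student"], "script", "nightly"),
    InterfaceRecord("HRSystem", "IdentityMgmt", ["staff"], "web service", "real-time"),
]

def impact_of_change(system):
    """List every interface touched if `system` changes its data structures."""
    return [r for r in catalogue
            if r.source_system == system or r.target_system == system]

for r in impact_of_change("StudentRecords"):
    print(r.source_system, "->", r.target_system, r.data_objects)
```

Even this toy query illustrates the first benefit above: assessing the impact of a system change becomes a lookup rather than a trawl through developers' memories.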

Step 3: Introduce data governance.


By introducing data stewards who will be responsible for ensuring that no master data system changes in data structure 'go live' before a check has been done against the interface catalogue and the data dictionary (through a formally defined process), the benefit we introduce at this point is:

  • A more sustainable IT architecture and a higher guarantee that business processes will function without disruption. This is due to data being managed as an institutional asset and ongoing changes to master data structures being managed in a controlled way as they propagate throughout our IT system landscape (avoiding system and process failure and/or damage to our organization’s reputation)
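As a rough illustration of the go-live check described above, here is a hedged Python sketch; the catalogue contents, function name and sign-off mechanism are all invented for the example and are not our actual process or tooling:

```python
# Hypothetical sketch of a data-governance go-live check. The catalogue
# contents and the sign-off mechanism are invented for illustration.
catalogue = {
    # master data system -> systems that consume its data
    "StudentRecords": ["VLE", "LibrarySystem", "Timetabling"],
    "HRSystem": ["IdentityMgmt", "Payroll"],
}

def golive_check(master_system, changed_fields, signed_off_by=None):
    """Refuse a data-structure change until its downstream impact has been
    reviewed by a data steward (represented here by signed_off_by)."""
    affected = catalogue.get(master_system, [])
    if affected and signed_off_by is None:
        raise RuntimeError(
            f"Change to {master_system} ({', '.join(changed_fields)}) affects "
            f"{len(affected)} downstream system(s); steward sign-off required")
    return affected
```

The point of the sketch is simply that the check is mechanical once the catalogue exists: the human judgment sits with the steward, not with whoever happens to remember the downstream systems.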

Step 4: Analyse the interface catalogue and develop a logical data model from the data dictionary. Our interface catalogue already reveals the high percentage of point-to-point interfaces that replicate very similar data synchronisation tasks between IT systems. We can at this point make recommendations on how to reduce this percentage dramatically and consolidate the integrations into a minimal number of key data services that share master data across the systems requiring it. This is where we relate SOA planning to the business architecture of the University, with an appreciation of how data is used across the student and research lifecycles. This step therefore requires business analysis as well as technical systems analysis.
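One way to perform this kind of analysis is to group the catalogued interfaces by the data object they carry; any object shipped over several separate interfaces is a candidate for consolidation. A hedged Python sketch, assuming a simple (source, target, data object) extract from the catalogue, with invented entries:

```python
# A hedged sketch of the duplication analysis: group point-to-point
# interfaces by the data object they carry. All entries are invented.
from collections import defaultdict

interfaces = [
    # (source system, target system, data object carried)
    ("StudentRecords", "VLE", "student"),
    ("StudentRecords", "LibrarySystem", "student"),
    ("StudentRecords", "CardOffice", "student"),
    ("HRSystem", "IdentityMgmt", "staff"),
]

by_object = defaultdict(list)
for source, target, data_object in interfaces:
    by_object[data_object].append((source, target))

# A data object shipped over several separate interfaces is a candidate for
# consolidation into a single shared data service in the target architecture.
candidates = {obj: pairs for obj, pairs in by_object.items() if len(pairs) > 1}
print(candidates)
```

Here three separate student-data feeds would be flagged as candidates for one shared student data service; the staff feed, used only once, would not.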

The benefit of this step is that we are able to describe more specifically the cost savings and agility the University stands to gain if it invests in consistently applied SOA technology. We can note Gartner's finding that 92% of the Total Cost of Ownership (http://www.gartner.com/it-glossary/total-cost-of-ownership-tco/) of an average IT application, based on 15 years of use, is incurred after the implementation project has finished. A significant part of those costs will be concerned with maintaining the application's seamless integration within the organisation's application architecture, so by simplifying this we should save costs over time.

Naturally, the other purpose of doing this step is so that we can analyse the “As Is” architecture and design the “To Be” architecture – a precursor for step 5.

Step 5: Adopt a SOA technology solution and train IT developers to use it.


The analysis from Step 4 should have provided us with the basis from which to design the data integration architecture we need. The 'blue ring' diagram is not a formal systems architecture diagram by any means; it is simply used to illustrate, roughly, the concept of a master data systems architecture through which we decouple systems requiring master data from connecting directly to the master data systems themselves (avoiding the proliferation of point-to-point systems integrations that we suffer from currently). We should have learned through Step 4 where we need real-time data, where near real-time data will suffice, and where real-time data is not essential. We can therefore identify the key processes that we need to support via ESB technology and where, in other places, nightly refreshes of data will 'do'. We can prioritise which master data services we want to implement in which order, using our favoured SOA technology (whether we go the open source or commercial route). We can decide whether we want to keep and improve our operational datastore, and we can determine carefully whether we deliberately want to maintain some point-to-point, database-to-database integrations (i.e. analyse where web services will not offer us greater benefit, acknowledging the trade-off between maintaining existing Oracle-database-to-Oracle-database connections and the possible reduction in performance if we deploy data service intermediaries). We need a dedicated team of trained developers to monitor and operate the data integration architecture according to standards we define (SOA governance), to understand how to optimise data caching within it, and so on. At this point the benefits we expect to achieve are:

  • Cheaper, less complex to maintain, standardized master data integration architecture (blue ring).
  • Higher guarantees of data security & compliance (data services to have Service Level Agreements and data security to be built in from the outset)
  • Agility to respond quickly to Cloud and Shared Services opportunities
  • Quicker to respond to changes in external reporting data structures (KIS, HESA, REF etc.)
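The triage in this step, deciding which integrations justify real-time ESB data services and which can remain nightly batch refreshes, can be sketched roughly as follows. The interface names and freshness labels are illustrative assumptions, not Bristol's data:

```python
# A rough sketch of the freshness triage in Step 5. The interfaces and
# their freshness requirements are invented examples.
interfaces = [
    {"name": "student->VLE", "freshness": "nightly"},
    {"name": "hr->identity", "freshness": "real-time"},
    {"name": "research->website", "freshness": "nightly"},
]

# Real-time requirements point to candidate ESB data services; the rest can
# remain as (or be consolidated into) nightly batch refreshes.
esb_candidates = [i["name"] for i in interfaces if i["freshness"] == "real-time"]
batch = [i["name"] for i in interfaces if i["freshness"] == "nightly"]

print("ESB services:", esb_candidates)
print("Nightly batch:", batch)
```

In practice the freshness requirement would come from business analysis of the processes each interface supports, not from a hard-coded label, but the principle of partitioning before prioritising is the same.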

Step 6: Train our IT developers and Business Analysts to work together using a standard set of skills and tools based around BPEL, BPM etc.

At this point we expect to reach the level of SOA sophistication where we have the ability to orchestrate and optimize end to end processes that share and manipulate data, creating efficiencies and business agility across the institution.
Suffice it to say, we are some years off Step 6! We are currently working on steps 2-4 above. This Autumn I am running a knowledge exchange workshop for HEIs working on SOA roadmaps, to gather, compare and contrast their plans, successes and lessons learned to date. If you are interested in coming, please contact me directly.

What are the benefits of Benefits Maps?

I have been to a couple of JISC-sponsored events at which benefits maps were presented, and it was claimed that they can be useful in helping a University to think about return on investment (ROI) in terms of strategic planning. So I suggested benefits maps as potentially useful when I ran an Enterprise Architecture-focussed workshop with the University's decision-making body (the Portfolio Executive) earlier this year. The group agreed to trial the approach and we have been doing quite a lot of work with this tool since then. I'll give a brief account of that here.

Photo of workshop participants working on a benefits map

Members of the University’s Technology Enhanced Learning Advisory Network hard at work on a Benefits Map for the Education area

Strategic Maturity

I introduced benefits maps as a tool to assist us as we continue the journey towards the desirable position of strategic maturity. I am sure there are many definitions of what strategic maturity actually is; my version goes something like this. Strategic maturity is a good position for the University to aim for: a position in which the University's complex map of initiatives, programmes and projects (the agents of change) is aligned to common strategic aims and objectives. A position in which planned changes (in business processes, culture, IT systems, buildings, governance and so on) are made with a clear understanding of the benefits they are supposed to deliver, and with the capability to communicate intention holistically and to perform benefits realisation (i.e. to know whether benefits and return on investment are truly being delivered, and if not, what to do about it). At senior levels of governance there should be clarity and consensus around high-level strategic objectives and an ability to state them concisely. Ultimately, achieving this level of strategic maturity should not only mean that the University performs more efficiently and confidently, but also that it is able to respond in an agile way to the need for 'business' change at any time, and thus be very successful at maintaining a competitive edge in all key areas.

Strategy on one page

I discussed with the Portfolio Executive the nature of the strategic framework within which the University of Bristol operates and whether "Strategy on one page" would be an achievable goal. We have lots of excellent, lengthy strategy documents at the University – the overarching one being the Vision and Strategy document at http://www.bris.ac.uk/university/vision/ – but sometimes I have found it hard to interlink them in a way that makes it easy for me, as Enterprise Architect, to help align IT Services planning with strategy (see Section 1 of the JISC Strategic ICT Toolkit for a nice articulation of why organisations seek this alignment: http://www.nottingham.ac.uk/gradschool/sict/toolkit/importance). We haven't decided on a format for strategy-on-one-page yet, but there are some interesting examples around; see for example the Sorbonne University sample at http://www.sorbonne-university.com/about-sorbonne-university/challenges/strategic-framework. However, we have been trying the benefits maps approach to elucidate the logic of our strategy and to help pave the way towards benefits realisation.

Modelling with benefits maps

Here’s an example of a Council’s benefits map (click to enlarge) designed to analyse the logic behind plans to regenerate the local economy and improve the city’s image:

Example City Council Benefits Map

There is a useful guide to developing and interpreting these sorts of benefits maps here: http://pspmawiki.londoncouncils.gov.uk/index.php/Benefits_Identification_And_Mapping

There are different approaches to modelling with benefits maps, but this one, with the five key concepts of project/output, business change, intermediate benefit, end benefit and strategic objective is the one I’m currently finding the most useful.
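A benefits map built on these five concepts is essentially a directed graph, which makes some useful checks mechanical, such as spotting a benefit with no project feeding into it. A small hedged Python sketch, with invented nodes and edges rather than any of our real maps:

```python
# Hedged sketch: a benefits map as a directed graph over the five concepts
# (output, business change, intermediate benefit, end benefit, strategic
# objective). All node names and edges are invented for illustration.
nodes = {
    "lecture capture project": "output",
    "more learning resources online": "intermediate benefit",
    "more positive student feedback": "end benefit",
    "excellent education": "strategic objective",
    "better student retention": "end benefit",  # deliberately unconnected
}
edges = [
    ("lecture capture project", "more learning resources online"),
    ("more learning resources online", "more positive student feedback"),
    ("more positive student feedback", "excellent education"),
]

# A benefit with nothing feeding into it may indicate a missing project.
fed = {target for _, target in edges}
orphaned = [name for name, kind in nodes.items()
            if kind.endswith("benefit") and name not in fed]
print(orphaned)  # ['better student retention']
```

The design choice here is deliberate: keeping the map to a handful of node types keeps it readable on one page while still allowing this kind of gap analysis.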

We are now in the process of producing three Benefits Maps: one for the Research area, one for the Education area and one for Support Areas that includes Administration, IT, and Finance. Our initial benefits maps demonstrate a mapping of all our currently planned project outputs and enablers to strategic objectives on a single page per benefits map. This gives us a part-visual impression of the current – “As Is” – state of our strategically focussed activity, based on the current projects portfolio. The next step considers gaps and opportunities and whether we can produce a “To Be” benefits map for each area that demonstrates what the project portfolio would ideally look like in order to fully meet prioritised strategic objectives. Using this methodology may inspire a reprioritisation of projects, or even a coordinated call for new projects, in order to help us to roadmap the way to the envisaged future position. Although we are producing three separate benefits maps for the different areas, we hope to do some interesting work in bringing them together into a simple, unified view of the University’s strategic priorities and supporting activities.

Benefits Realisation

Meanwhile, our Director of Planning is developing comprehensive benefits realisation plans that map to the end benefits in our benefits maps. These describe how we will measure and monitor the benefits on an ongoing basis as the University continues to strive to deliver on its vision and strategy. She is using a balanced scorecard approach. And where "problems" show up over time (i.e. wherever the measures indicate that we're not seeing the progress we'd predicted in some area) we hope to be able to use the benefits maps to inform discussions of why benefits are not being realised, and to help determine whether we need to action-plan our way back on course or leave things as they are (or perhaps even where to cease projects that are not delivering the intended benefits).

To explain, take a look at this example excerpt from an education-focussed benefits map. On the left-hand side we list the lecture capture project, which we show as delivering the interim benefit "more learning resources online", which in turn leads to the interim benefit "greater quality and consistency of the student learning experience". This feeds into an end benefit (that we are actively measuring via a number of mechanisms articulated outside of the benefits map): "more positive student feedback". Notice as well the business change shown as feeding into the interim benefit about more learning resources online: "culture change to encourage academics to make more use of online opportunities for enriching teaching and learning".

If we should find over time that we are not getting positive student feedback about the learning resources available online, then we can use the benefits map as a discussion tool in trying to figure out why. For example, we might find that the technology deployed by the lecture capture project was not fit for purpose. Or we might find that it was perfectly usable but academic culture had not adapted sufficiently (some academics may think that by recording all their lectures they'll put off large numbers of students from bothering to turn up to the physical lecture!). Naturally we could take action to improve the situation in either scenario. Of course this is not an exact science, and several other outputs feed into there being more learning resources online, but the benefits map approach does give more focus to our discussions.



Following my (albeit limited) experience in using benefits maps to help groups of people articulate strategy, I would recommend the following:

  • When brainstorming benefits maps from scratch, a blank piece of paper can be intimidating for some and produce a kind of writer's block. When we split large groups into smaller groups of four or five people, we found it very useful to have a facilitator working with each group – someone who is already familiar with benefits maps and good at interpreting and representing the verbal discussion in this format,
  • We have tried flip-chart paper and 'stickies' to get small groups working with benefits maps, but using an interactive whiteboard would be far preferable,
  • Introduce benefits maps by way of example (such as the council one above) and be aware that although the five concepts used seem really quick and easy to understand, some people will confuse ‘output/enabler’ with ‘benefit’ (e.g. someone I worked with was convinced that ‘online exams’ should be shown as an end benefit until I explained that it was an output from which benefits could be derived),
  • Encourage concise language – restrict benefits maps to one page (you can always extend the description of concepts in the benefits maps by cross-referencing them to accompanying papers, say).
  • Review benefits maps, extrapolate critical pathways in them for further discussion and keep them regularly updated.

The benefits

And what of the benefits of benefits maps?!

  • Benefits maps seem to be helping us to build consensus around strategy – they force precise thinking, concise terminology and logical articulations, causing groups of people to engage, focus and agree,
  • A comparison of “As Is” and “To Be” benefits maps can be used to develop project portfolio roadmaps,
  • Benefits maps can be aligned with benefits realisation plans and give traceability from project deliverables/outputs through to benefits through to strategic objectives. This helps the “where did it go wrong?” discussion should that arise and thus help inform action planning where necessary,
  • Benefits maps can be used to highlight key points for discussion – for example, a benefit with no project leading to it could mean that we are missing critical projects needed to bring about strategic change, or a project/benefit leading to no end benefits might mean that the organisation is investing in a project that is not delivering strategic value,
  • Ultimately, the development of benefits maps can be useful for aligning IT Services planning with organisational strategy (and presumably for aligning planning in any other part of the organisation). In fact there can be a very direct knock-on effect – the fact that the organisation wants to measure end benefits probably means that whole sets of data recorded across the University (from student feedback to the numbers of research outputs per research group, year on year) must be in good enough shape to support business intelligence! Which relates to my earlier blog post on data integration…

The value of an Interface Catalogue

See also my more recent blog post http://coredataintegration.isys.bris.ac.uk/2013/06/16/important-documentation-for-soa-the-interface-catalogue-and-data-dictionary/

Part of Enterprise Architecture activity involves examining the “As Is” in terms of the organisation’s systems architecture and developing a vision and specification of the “To Be” systems architecture. Many systems are integrated with each other in terms of data – data is ideally stored once and reused many times. In an HE context this could involve reusing student records information in a Virtual Learning Environment or surfacing research publications stored in a research information system on the University’s public website, and so on. So, the enterprise integration architecture is a key puzzle to unravel and to improve over time.

In your organization you may be lucky enough to have had a central, searchable database in which all systems interfaces have been documented to a useful level of detail since the beginning of time. If so, I envy you, because at the University of Bristol we are not so fortunate – yet. An important part of my exploration of the "As Is" for my organization this last year has been to attempt to map out our complex systems architecture and to understand how mature our integration architecture is. Do we need to design and develop a Service Oriented Architecture, and if so, are we 'ready' for it in terms of the maturity of our existing integration architecture and our clarity regarding future requirements? The problem has been a lack of documentation to date: many times, when integrations between systems have been created, they were done according to the particular developer's preference at the time, and not documented either in terms of how the integration was implemented (perhaps via ETL, using AJAX, using a web service, or merely via some Perl scripts) or with respect to the rationale for that method of integration (for a brief, useful analysis of integration options that I believe is still relevant several years on, see MIT's chart at http://tinyurl.com/MITIntegrationOptions). Information about these integrations remains in people's heads. And often these people leave (well, actually, a lot of people stay, as the University is a popular place to work – but you see my point).

At our University there are a lot of point-to-point integrations between systems – far more than we would like. So, to understand the extent of the problem, I have introduced an Interface Catalog which developers across teams are now undertaking to fill with information. There were some exchanges of information about using interface catalogs on the ITANA email list earlier this year; I used this and other resources to develop a proforma format for our University of Bristol interface catalog. I started this off in an internal wiki, as I've found this a good forum for relatively informal, collaborative development of standards in the first instance. My idea was not to impose a format, but to build consensus around what we really need to record and how consistent developers need to be in constructing the information in each record. In the early days it looked something like this (this doesn't show the full set of column headings):

screen grab of an early version of our interface catalog
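The full column set isn’t reproduced above, but to give a flavour of the kind of record involved, here is a minimal relational sketch. The column names are invented for illustration (not our actual Bristol schema), and SQLite stands in for a real database:

```python
import sqlite3

# Illustrative interface-catalog record; column names are invented,
# not the actual University of Bristol schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE interface_catalog (
        id             INTEGER PRIMARY KEY,
        source_system  TEXT NOT NULL,  -- system the data comes from
        target_system  TEXT NOT NULL,  -- system the data goes to
        method         TEXT NOT NULL,  -- e.g. ETL, Web Service, file transfer
        data_objects   TEXT,           -- what is transferred/transformed
        justification  TEXT,           -- why this method was chosen
        owning_team    TEXT            -- who maintains the interface
    )
""")
conn.execute(
    "INSERT INTO interface_catalog "
    "(source_system, target_system, method, data_objects, justification, owning_team) "
    "VALUES ('SITS', 'Blackboard', 'ETL', 'course codes', "
    "'nightly batch is sufficient', 'VLE team')"
)
row = conn.execute(
    "SELECT source_system, target_system, method FROM interface_catalog"
).fetchone()
print(row)  # ('SITS', 'Blackboard', 'ETL')
```

The justification column is the one developers most readily skip when documenting retrospectively, yet it is the field a design authority relies on most.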

After several iterations and use by developers, consensus was reached around the terminology we wished to use and the level of detail required. The team has now implemented the catalog in an Oracle database, mainly so that we can easily control vocabulary (avoiding different developers describing the same type of interface or class of data object etc. in different ways) and also so that we can more easily search the catalog.
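One simple way to enforce a controlled vocabulary in a relational catalog is a lookup table that every interface record must reference. A sketch, with SQLite standing in for Oracle and invented table and term names:

```python
import sqlite3

# Vocabulary control via a lookup table; names are illustrative only,
# and SQLite stands in for the Oracle database described above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE interface_method (name TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO interface_method VALUES (?)",
                 [("ETL",), ("Web Service",), ("ESB",), ("File Transfer",)])
conn.execute("""
    CREATE TABLE interface (
        id      INTEGER PRIMARY KEY,
        source  TEXT NOT NULL,
        target  TEXT NOT NULL,
        method  TEXT NOT NULL REFERENCES interface_method (name)
    )
""")
# A term outside the controlled vocabulary is rejected at insert time:
try:
    conn.execute("INSERT INTO interface (source, target, method) "
                 "VALUES ('SITS', 'Blackboard', 'etl job')")
except sqlite3.IntegrityError:
    print("rejected: 'etl job' is not a controlled term")
```

The same pattern extends to data-object names once a data dictionary is linked in.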

There is a good level of buy-in to using this catalog, which I am very pleased about. It is time-consuming to fill in information retrospectively, but developers report that it is quick and easy to record information about interfaces as they go along.

Some were initially unsure about the value of the catalog, as it is clear that the database will quickly run to many hundreds of rows, if not thousands. However, this catalog is not for browsing; it is a dataset for analysis and querying, and there are several benefits we hope to reap once we reach a critical mass of data:

  • When a system is up for replacement we will be able to query the catalog to see how many interfaces there are to that system and thus assess the work involved in integrating similarly (or indeed in new ways) with the incoming system.
  • When developers leave, they won’t take essential knowledge with them in their heads – centrally-held documentation is key!
  • The make-up of our current integration architecture will become clearer, and we will be able to produce a coherent analysis of the extent to which we depend on point-to-point integrations between systems (which are hard to sustain over time and reduce the agility of our overall architecture), and also of where different integrations repeat similar tasks in different ways (for example, transferring/transforming the same data objects). In the former case we can argue more clearly for a future, more mature integration architecture; in the latter we can look to offer core APIs providing commonly required functionality in a standard way (the reuse advantage).
  • The catalog will help us to introduce a more formal, standardised and thus consistent approach to the way we integrate new technical systems – if we choose ETL, say, then the developer should record the justification for that solution, if ESB was discounted, we can record why, and so on.
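Once a critical mass of data exists, several of these benefits reduce to simple queries. A sketch against a toy catalog (system names, columns and data invented purely for illustration):

```python
import sqlite3

# Toy interface catalog; all names and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interface "
             "(source TEXT, target TEXT, method TEXT, data_object TEXT)")
conn.executemany("INSERT INTO interface VALUES (?, ?, ?, ?)", [
    ("SITS", "Blackboard", "ETL",         "course codes"),
    ("SITS", "Finance",    "Web Service", "student records"),
    ("HR",   "SITS",       "ETL",         "staff records"),
    ("HR",   "Library",    "ETL",         "staff records"),
])

# Replacement impact: how many interfaces touch a system being replaced?
n = conn.execute("SELECT COUNT(*) FROM interface "
                 "WHERE source = 'SITS' OR target = 'SITS'").fetchone()[0]
print(n)  # 3

# Duplication: which data objects are moved by more than one interface?
dupes = conn.execute("SELECT data_object FROM interface "
                     "GROUP BY data_object HAVING COUNT(*) > 1").fetchall()
print(dupes)  # [('staff records',)]
```

The first query scopes the integration work for a system replacement; the second surfaces candidates for a shared, reusable API.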

In the interests of standardisation, we are linking the names used to describe systems to our corporate services catalog, and we are starting to interlink a Data Dictionary with the interface catalog (i.e. to specify more precisely the data objects that are transferred and transformed between systems for every integration developed).

I am able to view and search the database using SQL Developer as my client software, and a simple SQL query reveals data like this:

If anyone wishes to know more about the full set of detail we’re capturing per interface, feel free to get in touch. I would also be interested in others’ experiences with interface catalogs.





Looking back on the first year of my EA role at Bristol

Presentation to the JISC Transformations Programme

A couple of weeks ago I presented some thoughts on what I’ve learned through doing Enterprise Architecture in my new role at the University of Bristol this last year. The event was the JISC “Doing Enterprise Architecture” workshop and the slides from my presentation can be found here: Slide Presentation on The First Year of EA at Bristol.

Nikki presenting at the JISC EA workshop 2012

The common question: How did we get started on EA at the University?

Several people have asked me how we got the Enterprise Architecture post created here at Bristol – how did we convince senior management that it was needed and that they should invest funds towards formal adoption of the EA approach? I suppose the lead-up to this goes back nearly a decade, when several initiatives started to come together: formal adoption of the Prince2 methodology for managing projects, formal agreement on a clearly defined business case approval route (supported with expertise from a growing team of in-house business analysts), and then the creation of a senior management governance group to evaluate whether or not to approve business cases. The latter, called the Portfolio Executive, is a more recent and certainly vital piece of the governance landscape at Bristol. Given the financial pressures on the HE sector at large and the increased competitive focus of our institution, the appetite for adopting standard approaches – proven elsewhere to help create efficiencies and achieve strategic success – has grown significantly. In the last few years Bristol has undertaken a large programme of work called the Support Services Review, which has helped the University to evaluate its strengths and weaknesses across support functions, including IT Services. You can read more about this, together with details of our experience participating in the JISC EA Foundation Programme, in Luke Taylor’s 2009 blog entries. Part of the review resulted in the creation of a new Enterprise Architect role within IT Services.

So, essentially, the time was ripe for making the case that Enterprise Architecture was needed: to help increase the organisation’s ICT maturity, to avoid wasting money invested in IT systems, and to improve the likelihood that the organisation uses its IT capability effectively to increase its competitive advantage. I would emphasise to others looking to establish EA roles that it is of great value to point out where things have gone wrong in your institution to date and how an EA approach could have prevented them. For example, if your problems have included difficulties in retaining IT developers, that’s probably not an EA issue. However, if your problems include failed or poor IT projects due to a lack of joined-up thinking across the organisation, or over-expenditure resulting from a lack of precision in defining the roadmap from the “as is” state to the “to be”, then you can describe how EA could have helped, and so on. One of the things I started collating early on was “lessons learned” from conversations with a wide range of people. If no one owns these problems specifically, it may well be that the EA approach is not being governed or conducted successfully within the organisation.

We now find ourselves at the end of year one of my new role as Bristol’s EA. We’ve made some good EA progress already (more on that in a minute) and are now ready to define and propose general adoption of our tailored EA methodology. Whereas in industry one might find whole teams of architects, here we have just one post, so it is essential at this point to standardise how EA operates alongside the other frameworks and standards we have adapted for use at Bristol, such as ITIL (adopted throughout IT Services), Prince2 for project management (adopted by our Strategic Projects Office), and a standard IT project development methodology (for our now centralised IT developer teams).

Some key things I tackled in Year One

I approached the first year by trying to balance my efforts across strategic activity (engaging senior management, understanding and mapping strategic agendas across Education and Research and linking them to our ICT strategy) and tactical activity (applying EA to specific University projects). It takes longer than you might first imagine to understand how a large, complex organisation like the University of Bristol functions, both strategically and in terms of its business processes and IT systems.

Strategic Successes

I undertook an unofficial tour of the University, ensuring that I met with the director and senior team of our Research, Enterprise and Development department, engaged with key committees and the senior personnel on the Education side, and also met with senior people outside IT Services but very closely related to it, for example the Head of our Strategic Projects Office. This sort of senior engagement is only possible (given how busy people are) either if the EA is in a very senior position themselves or, as in my case, if the organisation has endorsed EA at a senior level and the director of IT, say, opens doors for you. The engagement is not one-off; it is ongoing.

My mission was to gain trust across the organisation and to show why and how EA activity supports a two-way dialogue between IT Services and various parts of the organisation. Sometimes the discussions stimulate more than an understanding of EA and lead on to realisations that change may be necessary – perhaps to cause orthogonal business processes to segue at key points, or to fill gaps in existing processes, for example. On some occasions there has been only a short space of time in which to convey complex ideas. I have found that non-IT people (who are nevertheless highly intelligent!) engage very well with reasonably complex ideas that are introduced simply, for example through lifecycle diagrams or the simple ICT maturity graph below. I was extremely pleased when I was recently able to describe Enterprise Architecture to our Portfolio Executive in just 10 minutes. The 10-minute discussion that ensued resulted in my being invited back to run a two-hour workshop with the same group so that they may explore in depth how EA may benefit the strategic decision-making process. I look forward to blogging more about that at some point.

Tactical Successes

It has not been my goal to document every system and every process with Archimate models. That would be impossible time-wise, and would create a repository of resources that would (at this stage in our use of EA) very quickly go out of date. However, applying EA to specific projects has been very helpful, firstly in helping me learn the Archimate language (see other posts in this blog), but also in helping us to understand the advantage of applying EA to specific areas so as to build up a picture of more general, cross-cutting concerns.

It also helps to justify the role of EA by offering examples of practical achievements. For example, on a project that launched a new online tool for departments to use in curriculum design, I was able to point out via Archimate models that in the “to be” picture there was ambiguity about who was performing certain business tasks (according to the business process layer), and a technical problem regarding the reconciliation of course codes generated by the tool when uploading changed course information to SITS (our student records system). The separation of layers in Archimate models really helps people to separate out concerns and issues when discussing problems on projects. It reduces the chance that the IT department blames the business analysts for project failures or vice versa (or that they both blame the end-users for not using the new service correctly!), because the combined view of people, business processes, systems and infrastructure helps pinpoint the nature of any problems in planned new service delivery.

My favourite top 5 success stories so far:

Using lifecycle diagrams to help decision makers think about the tension between making our administrative systems more cost-effective or more fully functional and the ultimate impact on user experience (from a holistic, cradle-to-grave perspective). See elsewhere in this blog for lifecycles (or the link to my presentation at the JISC workshop).

Using a simple ICT maturity discussion diagram adapted from the excellent book “Enterprise Architecture As Strategy” to encourage senior non-IT staff to think about long-term IT investment and IT as an enabler for strategic business change. The JISC ICT Maturity toolkit is also helpful here.

ICT Maturity Diagram adapted from "Enterprise Architecture as Strategy"

Getting people to talk about operating models, as captured in the simple diagram below, again adapted from the Ross, Weill and Robertson EA book. Highlighting that there are four possible operating models to choose from can resolve what are often sticking points, or circular arguments, in debates about how to improve systems architectures. It took a while for the penny to drop for me as to why the operating model conversation is so fundamental, but it became clear the more often I heard the question “what is the IT solution for this ‘mess’?”. Frequently my answer was “well, what do you want to achieve?”. For example, when asked what we should do about the overlaps and integration problems between SITS, Blackboard (our VLE) and several bespoke (and often very good) marks and assessment systems dotted around the University, part of my answer was: do you want to unify the systems you’ve got into one corporate solution that all departments are asked to use from now on (simplifying matters and saving IT support costs)? Or do some departments have such discipline-specific needs around marks and assessment, and are their products so strong, that removing them would cause reputational damage or failure to deliver on strategic goals? Or do you want a coordinated solution that integrates all marks and assessment results at the summary level and leaves some departments to manage the process leading up to summary marks using their chosen systems?


Having this sort of conversation – with fundamental decisions made and then governed by the right people structures – in a sense protects IT Services. If a decision has been taken to support a central, corporate system for some business function and to phase out local implementations, then the next time a department requests support, IT Services knows it can say “no, sorry, we have been told to dedicate support to the central system, not local implementations; person X has been talking to you about the migration plan”. Without a clear decision on whether the business operating model for some area will be diverse, unified, coordinated or replicated, IT Services cannot be clear about where it should invest its resource, i.e. whether it should be investing time in data integration, or in supporting many different systems (and probably a range of developer skill sets), or in migrating legacy systems to centralised solutions and therefore investing in core developer skill sets, and so on.

Helping to establish a common language. A lot of the EA role is about communication, and I think good communication is facilitated by precise language. By using precise terminology myself, it has been encouraging to see others connect with that terminology and reuse it. Now when I talk with the business analysts, for example, we all clarify whether we’re talking about the “as is” or the “to be” when looking at their process maps. When helping to develop business cases for new projects, we clarify with senior management whether we’re developing a strategic or a tactical project. Elsewhere, in discussions I ask: do you want a solution that caters for diversity, or one that supports a unified business process, or some variation (i.e. the operating model discussion)?

Resolving Boundary Disputes. These arise where projects, for example, overlap in their goals, potentially resulting in a variety of problems: confused business processes (multiple systems for the same task), inconsistent data (data in certain systems not updated at the same rate as in others), inefficient spending (buying and supporting more than one system with very similar functionality) or strategic disadvantage (such as surfacing several Web views of the same key University information with unintentionally diverse branding, ultimately offering a confusing or poor user experience across sites). I’ve tended to run workshops to help resolve boundary disputes. The EA role here is one of facilitation – helping stakeholders to see their position in relation to holistic (i.e. enterprise-level) goals, to understand the strategic implications as well as the technical impact of options, and to come to a joint consensus where possible. Sometimes a consensus is not possible, and on those occasions decisions most likely have to be pushed upwards for resolution at the appropriate senior level. In my experience so far, EA is as much about communication and understanding strategy, governance structures and processes within the organisation as it is about technology.

All in all, a very enjoyable first year. The intern who worked with me in the early part of the year rather aptly produced this picture of her impression of how hard a job being an EA can be, in terms of balancing the range of activities that need to be undertaken:

Post card from Sara

Over time I hope to paint a more elegant picture of the balancing act – I wonder if my yoga practice can help me here!

Tree pose