“Value through Iterative Innovation”

WATERFALL The Waterfall methodology is the most common project management approach in today’s workplace. It is based on a top-down approach to work and problem solving. Its strength lies in processes that focus on the production of a tangible product and in any process related to compliance or regulation. In a waterfall model, each phase must be completed before the next phase can begin, with no overlapping of phases, hence the name Waterfall. The Waterfall model was the first process model to be introduced. It is also referred to as a linear-sequential life cycle model.

The sequential phases:
• Requirement gathering and analysis: All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
• System design: The requirement specifications from the first phase are studied in this phase and the system design is prepared. The system design helps in specifying hardware and system requirements and in defining the overall system architecture.
• Implementation: With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as unit testing.
• Integration and testing: All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration, the entire system is tested for any faults and failures.
• Deployment: Once functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
• Maintenance: Some issues come up in the client environment; patches are released to fix them. Better versions are also released to enhance the product. Maintenance is done to deliver these changes in the customer environment.
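The strict, gated ordering of these phases can be sketched as a simple sequential driver. This is only an illustrative sketch (the phase names follow the list above; nothing here comes from a real tool):

```python
# Illustrative sketch of a waterfall lifecycle: each phase runs to
# completion before the next begins, and there is no going back.

PHASES = [
    "Requirement Gathering and Analysis",
    "System Design",
    "Implementation",
    "Integration and Testing",
    "Deployment",
    "Maintenance",
]

def run_waterfall(phases):
    """Execute phases strictly in order; each phase is a gate that must
    be passed (e.g. a signed-off specification) before the next starts."""
    completed = []
    for phase in phases:
        completed.append(phase)  # the phase finishes fully before the next
    return completed

print(run_waterfall(PHASES))
```

The point of the sketch is the shape of the control flow: a single forward pass with no loop back to an earlier phase, which is exactly the inflexibility discussed below.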

The principal advantage is that the Waterfall SDLC allows for departmentalization and control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation, troubleshooting, and ends up at operation and maintenance.
Each phase of development proceeds in strict order.

The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.
The solution can only be measured and quantified for value at the end of the project.

AGILE The term ‘agile’ was created in 2001 when a group of ‘independent thinkers around software development’ came together to talk about an alternative to the heavyweight, document-driven processes that existed at the time. Known as the ‘Waterfall method’, these old-fashioned processes comprised a sequence of technical phases that were slow and struggled to respond to changing requirements, particularly when they were mired in too much detail from the start.

The group was already working in ways that later became described as agile. An output from this meeting was the Manifesto for Agile Software Development, or the ‘Agile Manifesto’ as it is more commonly known, and its impact and success have been quite dramatic. The Agile Manifesto contains 12 principles. The Agile movement seeks alternatives to traditional project management. Agile approaches help teams respond to unpredictability through incremental, iterative work cadences and empirical feedback. Agilists propose alternatives to waterfall, or traditional sequential, development. Agile development methodology provides opportunities to assess the direction of a project throughout the development lifecycle.
This is achieved through regular cadences of work, known as sprints or iterations, at the end of which teams must present a potentially shippable product increment. By focusing on the repetition of abbreviated work cycles as well as the functional product they yield, agile methodology is described as “iterative” and “incremental.” In waterfall, development teams only have one chance to get each aspect of a project right. In an agile paradigm, every aspect of development — requirements, design, etc. — is continually revisited throughout the lifecycle. When a team stops and re-evaluates the direction of a project every two weeks, there’s always time to steer it in another direction.
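The sprint cadence described above can be sketched as a loop: a fixed-capacity iteration delivers an increment, then the remaining backlog is re-prioritized before the next iteration starts. All names and numbers below are illustrative, not from any Scrum tool:

```python
# Illustrative sketch of iterative, incremental delivery: work happens in
# fixed-length sprints, and direction is re-evaluated after every sprint.

def run_sprints(backlog, sprint_capacity, num_sprints, reprioritize):
    """Deliver a potentially shippable increment each sprint, then let
    stakeholders reorder what is left (empirical feedback)."""
    increments = []
    for _ in range(num_sprints):
        if not backlog:
            break
        increment, backlog = backlog[:sprint_capacity], backlog[sprint_capacity:]
        increments.append(increment)
        backlog = reprioritize(backlog)  # inspect-and-adapt between sprints
    return increments, backlog

# Example: stakeholders decide mid-project that short items matter most.
done, remaining = run_sprints(
    ["login", "reporting dashboard", "search", "export", "audit log"],
    sprint_capacity=2,
    num_sprints=2,
    reprioritize=lambda items: sorted(items, key=len),
)
```

The `reprioritize` hook is the "steer it in another direction" step: unlike the waterfall pass, the remaining work is reordered every cycle.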

The results of this “inspect-and-adapt” approach to development greatly reduce both development costs and time to market. Because teams can develop software at the same time they’re gathering requirements, the phenomenon known as “analysis paralysis” is less likely to impede a team from making progress. And because a team’s work cycle is limited to two weeks, it gives stakeholders recurring opportunities to calibrate releases for success in the real world. Agile development methodology helps companies build the right product. Instead of committing to market a piece of software that hasn’t even been written yet, agile empowers teams to continuously replan their release to optimize its value throughout development, allowing them to be as competitive as possible in the marketplace. Development using an agile methodology preserves a product’s critical market relevance and ensures a team’s work doesn’t wind up on a shelf, never released.

SCRUM Scrum is a simple, people-centric framework for organizing and managing work. It is built on a specific set of foundational values, principles and practices. Practitioners and adopters typically add their own unique approaches to the Scrum framework, creating a version of Scrum that is unique to their circumstances, whilst ensuring the core values of iterative development remain relatively unchanged. Scrum has been used primarily for software development, to improve speed of development and adaptation to customer needs and values.

Scrum development efforts consist of one or more Scrum teams, each made up of three Scrum roles: Product Owner, Scrum Master, and the Development Team. There can be other roles when using Scrum, but the Scrum framework requires only the three listed. The Product Owner is the empowered central point of product leadership; sometimes known as the face of the business, the Product Owner ensures value is produced and decides which features and functionality to build and the order in which to build them. The Scrum Master acts as coach, facilitator, and impediment remover. This role helps everyone involved understand and embrace the Scrum values, principles, and practices, helping the organization obtain exceptional results from applying Scrum. The Development Team is a diverse, cross-functional collection of all of the types of people needed to design, build, and test a desired product. The Development Team self-organizes to determine the best way to accomplish the goal set out by the Product Owner. Development teams can be as small as three people but are typically five to nine people in size.
XP – Extreme Programming Beck calls XP a “lightweight methodology” that challenges the assumption that getting the software right the first time is the most economical approach in the long run. Beck’s fundamental idea is to start simply, build something real that works in its limited way, and then fit it into a design structure that is built as a convenience for further code building rather than as an ultimate and exhaustive structure after thorough and time-consuming analysis. Rather than specialize, all team members write code, test, analyze, design, and continually integrate code as the project develops. Because there is much face-to-face communication, the need for documentation is minimized.

Extreme Programming emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. Extreme Programming implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.
Extreme Programming improves a software project in five essential ways; communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. They deliver the system to the customers as early as possible and implement changes as suggested. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation Extreme Programmers are able to courageously respond to changing requirements and technology.
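The “feedback by testing from day one” practice is typically realized as automated tests written before, or alongside, the code. A minimal, hypothetical test-first sketch (the function and values are invented for illustration):

```python
# XP-style test-first sketch: the test states the expected behaviour
# first; the implementation is then kept as simple as possible.
# apply_discount is a made-up example, not from any real project.

def test_discount():
    assert apply_discount(100.0, 0.25) == 75.0  # 25% off 100 is 75
    assert apply_discount(50.0, 0.0) == 50.0    # no discount, no change

def apply_discount(price, rate):
    """Simplest thing that could possibly work."""
    return price * (1.0 - rate)

test_discount()  # running the tests gives immediate feedback on regressions
```

Because the test runs in seconds, the feedback loop the text describes is continuous rather than deferred to a late testing phase.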
The most surprising aspect of Extreme Programming is its simple rules. Extreme Programming is a lot like a jigsaw puzzle: there are many small pieces, and individually the pieces make no sense, but when combined a complete picture can be seen. The rules may seem awkward and perhaps even naive at first, but they are based on sound values and principles.
Our rules set expectations between team members, but they are not the end goal in themselves. You will come to realize that these rules define an environment that promotes team collaboration and empowerment; that is your goal. Once achieved, productive teamwork will continue even as the rules are changed to fit your company’s specific needs.
DevOps is a descriptive, not a prescriptive, concept whose purpose is to increase collaboration, reduce waste and automate as much of the process as possible. DevOps emphasizes the importance of communication and collaboration between software developers and production IT professionals, while automating the deployment of software and infrastructure changes. Ultimately, DevOps attempts to create a working environment in which building, testing, and deploying software can occur rapidly, frequently and reliably. In turn, this enables an organization to expose value more quickly, allowing for a faster turnaround time in the deployment of new features, security patches, and bug fixes.

The term “DevOps” typically refers to the emerging professional movement that advocates a collaborative working relationship between Development and IT Operations, resulting in the fast flow of planned work, while simultaneously increasing the reliability, stability, resilience and security of the production environment. Why Development and IT Operations? Because that is typically the value stream that is between the business (where requirements are defined) and the customer (where value is delivered).

The origins of the DevOps movement are commonly placed around 2009, as the convergence of numerous adjacent and mutually reinforcing movements:
• The Velocity Conference movement, especially the seminal “10 Deploys A Day” presentation given by John Allspaw and Paul Hammond
• The “infrastructure as code” movement (Mark Burgess and Luke Kanies), the “Agile infrastructure” movement (Andrew Shafer) and the Agile system administration movement (Patrick DeBois)
• The Lean Startup movement by Eric Ries
• The continuous integration and release movement by Jez Humble
• The widespread availability of cloud and PaaS (platform as a service) technologies (e.g., Amazon Web Services)

DevOps evangelizes popular programming concepts of agile development, continuous integration, and continuous delivery and extends that ethos into the social aspect of IT by placing a premium on the importance of tearing down walls that divide development, operations, support, and management teams. A survey of 4,600 IT professionals by Puppet in June 2016 found that IT departments with a robust DevOps workflow deploy software 200 times more frequently than low-performing IT departments. In addition, they have 24 times faster recovery times, and three times lower rates of change failure, while spending 50% less time overall addressing security issues, and 22% less time on unplanned work.

While the concept of continuous delivery—and by extension, DevOps—may be counter intuitive to some, the end goal of frequent software deployments is to make the process so routine as to be a non-event, as opposed to a disruptive major roll out.

The 3 Ways
The First Way emphasizes the performance of the entire system, as opposed to the performance of a specific silo of work or department — this can be as large as a division (e.g., Development or IT Operations) or as small as an individual contributor (e.g., a developer, system administrator).

The focus is on all business value streams that are enabled by IT. In other words, it begins when requirements are identified (e.g., by the business or IT), are built in Development, and then transitioned into IT Operations, where the value is then delivered to the customer as a form of a service.

The outcomes of putting the First Way into practice include never passing a known defect to downstream work centers, never allowing local optimization to create global degradation, always seeking to increase flow, and always seeking to achieve profound understanding of the system (as per Deming).

The Second Way is about creating right-to-left feedback loops. The goal of almost any process improvement initiative is to shorten and amplify feedback loops so necessary corrections can be continually made.

The outcomes of the Second Way include understanding and responding to all customers, internal and external, shortening and amplifying all feedback loops, and embedding knowledge where we need it.

The Third Way is about creating a culture that fosters two things: continual experimentation, taking risks and learning from failure; and understanding that repetition and practice are the prerequisite to mastery.

We need both of these equally. Experimentation and taking risks are what ensures that we keep pushing to improve, even if it means going deeper into the danger zone than we’ve ever gone. And we need mastery of the skills that can help us retreat out of the danger zone when we’ve gone too far.

The outcomes of the Third Way include allocating time for the improvement of daily work, creating rituals that reward the team for taking risks, and introducing faults into the system to increase resilience.

Scaling Scrum Large-Scale Scrum is Scrum scaled up to multiple teams. It isn’t a bigger process that includes Scrum, but is Scrum at its core.

Exponents of scaling Scrum include:
• The Nexus Guide – Nexus is a framework that drives to the heart of scaling: cross-team dependencies and integration issues. It is an exoskeleton that rests on top of multiple Scrum Teams who work together to create an Integrated Increment. It builds on the Scrum framework and values. The result can be an effective development group of up to 100 people. For larger initiatives, there is Nexus+, a unification of more than one Nexus.
• LeSS (Large-Scale Scrum) – Scaling Scrum starts with understanding standard one-team Scrum. From that point, your organization must be able to understand and adopt LeSS, which requires examining the purpose of one-team Scrum elements and figuring out how to reach the same purpose while staying within the constraints of the standard Scrum rules.
• SAFe – SAFe’s practices are grounded in nine fundamental principles that have evolved from Agile principles and methods, Lean product development, systems thinking, and observation of successful enterprises.
LEAN Lean traces its roots back to people’s desire to create products. It consists of continuously evolving concepts and deeper thinking on business performance. Since Lean principles are applied in many contexts, its tools and methods have multiple sources. However, many of the iconic elements of Lean come from the Toyota Production System. Lean helps to focus on customer value. By doing so, organizations add more value to their products and services while reducing sources of waste and increasing their agility and ability to adapt. An improved dialogue and connection with customers and end-users enables an IT organization to drastically increase the loyalty of satisfied customers.

A consequence of Lean is a paradigm shift in the way we think. It challenges our assumptions of how work is supposed to be done and how responsibilities are supposed to be executed.
The Lean organization continuously improves process performance, which offers it great strategic value. Its services are of better quality, its delivery times are shorter and its efficiency of development and deployment keeps increasing. The most important asset for a ‘knowledge worker’ organization is its people: Lean promises higher involvement and motivation of employees. Additionally, financial benefits are to be expected from reducing process waste and optimizing value-adding work, which frees up time to add even more value. Also, reducing the time between order intake and delivery will improve cash flow. It must be stressed, however, that increasing profit margins is not the primary goal of Lean, although it can be expected as a secondary effect of improving processes and thereby reducing effort spent on non-value-adding activities.
Six Sigma is a disciplined, data-driven approach and methodology for eliminating defects (driving toward six standard deviations between the mean and the nearest specification limit) in any process – from manufacturing to transactional and from product to service.

Six Sigma is now, according to many business development and quality improvement experts, the most popular management methodology in history. Six Sigma is certainly a very big industry in its own right, and Six Sigma is now an enormous ‘brand’ in the world of corporate development. Six Sigma began in 1986 as a statistically based method to reduce variation in electronic manufacturing processes at Motorola Inc. in the USA. Today, twenty-something years on, Six Sigma is used as an all-encompassing business performance methodology, all over the world, in organizations as diverse as local government departments, prisons, hospitals, the armed forces, banks, and multinational corporations. While Six Sigma implementation continues apace in many of the world’s largest corporations, many organizations and suppliers in the consulting and training communities have also seized on the Six Sigma concept, to package and provide all sorts of Six Sigma ‘branded’ training products, consultancy and services.

While Six Sigma has become a very widely used ‘generic’ term, the name Six Sigma is actually a registered trademark of Motorola Inc. in the USA, which first pioneered Six Sigma methods in the 1980s.

Six Sigma central concepts

You will gather from the definitions and history of Six Sigma that many people consider the model to be capable of leveraging huge performance improvements and cost savings.

None of this of course happens on its own. Teams and team leaders are an essential part of the Six Sigma methodology.

Six Sigma is therefore a methodology which requires and encourages team leaders and teams to take responsibility for implementing the Six Sigma processes. Significantly these people need to be trained in Six Sigma’s methods – especially the use of the measurement and improvement tools, and in communications and relationship skills, necessary to involve and serve the needs of the internal and external customers and suppliers that form the critical processes of the organization’s delivery chains.

Training is therefore also an essential element of the Six Sigma methodology, and lots of it.

Consistent with the sexy pseudo-Japanese ‘Six Sigma’ name (Sigma is in fact Greek, for the letter ‘s’, and a long-standing symbol for a unit of statistical variation measurement), Six Sigma terminology employs sexy names for other elements within the model, for example ‘Black Belts’ and ‘Green Belts’, which denote people with different levels of expertise (and to an extent qualifications), and different responsibilities, for implementing Six Sigma methods.

Six Sigma teams and notably Six Sigma team leaders (‘Black Belts’) use a vast array of tools at each stage of Six Sigma implementation to define, measure, analyse and control variation in process quality, and to manage people, teams and communications.

When an organization decides to implement Six Sigma, first the executive team has to decide the strategy – which might typically be termed an improvement initiative, and this base strategy should focus on the essential processes necessary to meet customer expectations.

This could amount to twenty or thirty business process. At the top level these are the main processes that enable the organization to add value to goods and services and supply them to customers. Implicit within this is an understanding of what the customers – internal and external – actually want and need.

A team of managers (‘Black Belts’ normally) who ‘own’ these processes is responsible for:

• identifying and understanding these processes in detail, and also
• understanding the levels of quality (especially tolerance of variation) that customers (internal and external) expect, and then
• measuring the effectiveness and efficiency of each process performance – notably the ‘sigma’ performance, i.e., the number of defects per million operations (pro rata if appropriate, of course).

The theory is entirely logical: understanding and then improving the most important ‘delivery-chain’ processes will naturally increase efficiency, customer satisfaction, competitive advantage, and profitability.

Easily said – tricky to achieve – which is what the Six Sigma methodology is for.

Most practitioners and users of Six Sigma refer to Motorola’s early DMAIC acronym (extended since to DMAICT) as a way of reinforcing and reminding participants what needs to be done:
Six Sigma DMAIC and DMAICT process elements

D – Define opportunity
M – Measure performance
A – Analyse opportunity
I – Improve performance
C – Control performance, and optionally:
T – Transfer best practice (to spread the learning to other areas of the organization)
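As a stylized illustration only, one DMAIC pass over a single process metric might look like the following loop (the 10%-per-cycle improvement figure is invented for the example):

```python
# Stylized DMAIC loop over one metric: defects per million operations.
# Define the target, then Measure, Analyse, Improve and Control until
# the process meets it. The improvement rate is purely illustrative.

def dmaic(dpmo, target_dpmo, improvement_per_cycle=0.10):
    history = [dpmo]                          # Define: current level vs target
    while dpmo > target_dpmo:                 # Measure against the target
        dpmo *= 1 - improvement_per_cycle     # Analyse + Improve (stylized)
        history.append(round(dpmo, 1))        # Control: record the trend
    return history

trend = dmaic(66_800, 6_200)  # roughly three sigma improved toward four sigma
```

The returned history is the "Control" record a team would chart to confirm the improvement is sustained, and could be handed to another area for the optional Transfer step.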

Motorola emphasises that in order for Six Sigma to achieve ‘breakthrough improvements’ that are sustainable over time, Six Sigma’s ‘process metrics’ and ‘structured methodology’ must be extended and applied to ‘improvement opportunities’ that are directly linked to ‘organizational strategy’. It is difficult to argue with the logic. There is little point in measuring and improving things that have no significant impact on the strategically important organizational processes.

Six Sigma team leaders (Black Belts) work with their teams (team members will normally be people trained up to ‘Green Belt’ accreditation) to analyse and measure the performance of the identified critical processes. Measurement is typically focused on highly technical interpretations of percentage defects (by which a ‘sigma’ measurement is arrived at – see the one-to-six sigma conversion scale below), and a deep, detailed analysis of processes, involving organizational structures and flow-charts. Many other tools for performance measurement and analysis are used, for example the ‘balanced scorecard’ method and ‘process mapping’, depending on the processes and systems favoured by the team leaders and project statisticians, and what needs to be measured and analysed. Six Sigma does not stipulate specifically what analytical methods must be used – the organization and particularly the team leaders decide these things, which is why implementation and usage of Six Sigma varies so widely, and why Six Sigma will continue to evolve. Any analytical tool can be included within a Six Sigma implementation.

Six Sigma experts and commentators commonly refer to typical failure rates of organizations that have not put particular pressure on their quality performance levels. Aside from anything else this at least helps to put the ‘Sigma’ terminology into a simpler mathematical context:

It is said that many ordinary businesses actually operate at between two and three sigma performance. This equates to between approximately 66,800 (three sigma) and 308,500 (two sigma) defects per million operations (which, incidentally, is also generally considered to be an unsustainable level of customer satisfaction – i.e., the business is likely to be in decline, or about to head that way). Bear in mind that an ‘operation’ is not limited to manufacturing processes – an ‘operation’ can be any process critical to customer satisfaction, for example the operation of correctly understanding a customer request, or the operation of handling a customer complaint. Six Sigma is not restricted to engineering and production – Six Sigma potentially covers all sorts of service-related activities. What matters is that the operation is identified as being strategically critical and relevant to strategy and customer satisfaction.

A measurement of four sigma equates to approximately 6,200 DPMO, or around 99.4% perfection. This would arguably be an acceptable level of quality in certain types of business, for instance a roadside cafe, but a 99.4% success rate is obviously an unacceptable level of quality in other types of business, for example, passenger aircraft maintenance.

A measurement of five sigma equates to just 233 defects per million opportunities, equivalent to a 99.98% perfection rate, and arguably acceptable to many businesses, although absolutely still not good enough for the aircraft industry.
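These sigma-to-DPMO figures come from the normal distribution combined with the conventional 1.5-sigma long-term shift used in Six Sigma tables; the conversion can be computed directly:

```python
from math import erf, sqrt

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    assuming a normally distributed process and the conventional
    1.5-sigma long-term shift (one-sided defect probability)."""
    z = sigma_level - shift
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return (1.0 - phi) * 1_000_000

# Reproduces the figures quoted above (to rounding):
for level in (2, 3, 4, 5, 6):
    print(f"{level} sigma ~ {dpmo(level):,.1f} DPMO")
```

For example, `dpmo(4)` gives roughly 6,210 and `dpmo(5)` roughly 233, matching the four- and five-sigma figures in the text, while `dpmo(6)` gives the famous 3.4 defects per million.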

KANBAN Kanban is a technique for managing a software development process in a highly efficient way. Kanban underpins Toyota’s “just-in-time” (JIT) production system. Although producing software is a creative activity and therefore different to mass-producing cars, the underlying mechanism for managing the production line can still be applied.
A software development process can be thought of as a pipeline with feature requests entering one end and improved software emerging from the other end.
Inside the pipeline, there will be some kind of process which could range from an informal ad hoc process to a highly formal phased process. In this article, we’ll assume a simple phased process of: (1) analyse the requirements, (2) develop the code, and (3) test it works.

The Effect of Bottlenecks
A bottleneck in a pipeline restricts flow. The throughput of the pipeline as a whole is limited to the throughput of the bottleneck.

Kanban reveals bottlenecks dynamically

Kanban is incredibly simple, but at the same time incredibly powerful. In its simplest incarnation, a kanban system consists of a big board on the wall with cards or sticky notes placed in columns with numbers at the top.
Limiting work-in-progress reveals the bottlenecks so you can address them.

Cards represent work items as they flow through the development process represented by the columns. The numbers at the top of each column are limits on the number of cards allowed in each column.

The limits are the critical difference between a kanban board and any other visual storyboard. Limiting the amount of work-in-progress (WIP), at each step in the process, prevents overproduction and reveals bottlenecks dynamically so that you can address them before they get out of hand.
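A WIP-limited board can be sketched in a few lines; the column names and limits below are illustrative. Refusing to pull a card into a full column is exactly what makes the bottleneck visible:

```python
# Minimal kanban board sketch: columns have WIP limits, and a card can
# only be pulled into a column that still has free capacity.

class KanbanBoard:
    def __init__(self, limits):
        # limits: mapping of column name -> WIP limit (the numbers at the
        # top of each column on the physical board)
        self.limits = dict(limits)
        self.columns = {name: [] for name in limits}

    def add(self, column, card):
        """Pull a card into a column; refuse if the WIP limit is reached."""
        if len(self.columns[column]) >= self.limits[column]:
            return False  # column is full -- a bottleneck is forming here
        self.columns[column].append(card)
        return True

board = KanbanBoard({"analyse": 2, "develop": 3, "test": 2})
board.add("test", "feature-A")
board.add("test", "feature-B")
blocked = board.add("test", "feature-C")  # refused: 'test' is the bottleneck
```

When `add` returns `False`, upstream workers stop starting new items and instead help clear the full column, which is the pull-system behaviour described in the bullet points below.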

• Kanban systems are from a family of approaches known as pull systems.
• Eliyahu Goldratt’s Drum-Buffer-Rope application of the Theory of Constraints is an alternative implementation of a pull system.
• The motivation for pursuing a pull-system approach was two-fold: to find a systematic way to achieve a sustainable pace of work, and to find an approach to introducing process changes that would meet with minimal resistance.
• Kanban is the mechanism that underpins the Toyota Production System and its kaizen approach to continuous improvement.
• The first virtual kanban system for software engineering was implemented at Microsoft beginning in 2004.
• Results from early Kanban implementations were encouraging with regard to achieving sustainable pace, minimizing resistance to change through an incremental evolutionary approach, and producing significant economic benefits.
• The Kanban Method as an approach to change started to grow in community adoption after the Agile 2007 conference in Washington, D.C., in August 2007.
• Throughout this text, “kanban” (small “k”) refers to signal cards, and “kanban system” (small “k”) refers to a pull system implemented with (virtual) signal cards.
• Kanban (capital “K”) is used to refer to the methodology of evolutionary, incremental process improvement that emerged at Corbis from 2006 through 2008 and has continued to evolve in the wider Lean software development community in the years since.

PRINCE2 / PRINCE2 Agile The term ‘agile’ is very broad and is viewed in many different ways throughout the agile community. There is a set of well-known frameworks referred to as ‘agile methods’, and there are also well-known behaviours, concepts and techniques that are recognized as characterizing the agile way of working. But there is no single definition of agile that accurately encapsulates them all, although the Agile Manifesto comes the closest to achieving this. PRINCE2 Agile describes how to configure and tune PRINCE2 so that PRINCE2 can be used in the most effective way when combining it with agile behaviours, concepts, frameworks and techniques.

PRINCE2 and PRINCE2 Agile are only suitable for use on projects, whereas agile can be used for projects and routine ongoing work as well. Throughout this manual, routine ongoing work is referred to as ‘business as usual’ (BAU) and covers such areas as ongoing product development, product maintenance and continual improvement.

The distinction between project work and BAU work is important because some of the agile ways of working need to be applied differently in each situation. Therefore, when carrying out a piece of work it is important to understand the type of work being undertaken, to ensure that it is addressed in the appropriate way and that agile is used appropriately.

What does BAU look like?
BAU work would typically be repeatable routine tasks that can be carried out by people with the appropriate technical skills without needing to be managed by a project manager. An example of this would be when modifications or enhancements need to be made to an existing product and the timescales are relatively short. There would usually be a long list of these tasks arriving regularly throughout the lifespan of the product. There may be an established team dedicated to this work.

What does a project look like?
A project is a temporary situation where a team is assembled to address a specific problem, opportunity or change that is sufficiently difficult that it cannot be handled as BAU. It may even be a collection of BAU items handled collectively. An example of a project would be where a new product or service is being created – there may be a need to engage many stakeholders and a significant amount of uncertainty exists. The project team may be based in different locations, the team personnel may change, the project may last a long time and it may be part of a wider programme of work. Importantly, it needs to be managed by a project manager.

ITIL is a framework providing best practice guidelines on all aspects of end-to-end service management. It covers the complete spectrum of people, processes, products and use of partners. ITIL® enables organizations to utilize leading-edge IT capabilities to provide world-class services and maximize value. By employing the IT service management best practices described in ITIL®, organizations have been proven to increase productivity, optimize costs and improve customer experience. ITIL is split into 5 core areas: Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement.

ITIL service strategy – specifies that each stage of the service lifecycle must stay focused upon the business case, with defined business goals, requirements and service management principles.
ITIL service design – provides guidance for the production and maintenance of IT policies, architectures and documents.
ITIL service transition – focuses upon change management role and release practices, providing guidance and process activities for transitioning services into the business environment.
ITIL service operation – focuses upon delivery and control process activities based on a selection of service support and service delivery control points.
ITIL continual service improvement – focuses upon the process elements involved in identifying and introducing service management improvements, as well as issues surrounding service retirement.
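Because the five stages above form an ordered lifecycle, they can be sketched as a simple ordered mapping from stage to focus. This is only an illustrative data-structure view of the list above, not anything defined by ITIL itself.

```python
# The five ITIL lifecycle stages, in order, each mapped to a short summary
# of its focus (paraphrased from the descriptions above).
ITIL_LIFECYCLE = {
    "Service Strategy": "business case, goals, requirements, principles",
    "Service Design": "production and maintenance of policies, architectures, documents",
    "Service Transition": "change management and release practices",
    "Service Operation": "delivery and control process activities",
    "Continual Service Improvement": "identifying and introducing improvements",
}

def next_stage(current: str):
    """Return the stage that follows `current`, or None at the end of the lifecycle."""
    stages = list(ITIL_LIFECYCLE)
    i = stages.index(current)
    return stages[i + 1] if i + 1 < len(stages) else None

print(next_stage("Service Design"))  # Service Transition
```

Note that Continual Service Improvement is usually depicted as wrapping around the other four stages rather than simply following them; the linear ordering here is a simplification.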

ITIL helps business managers and IT managers deliver services to customers in an effective manner, thereby gaining the customers’ confidence and satisfaction. Here are the areas where ITIL plays an effective role:
• IT and business strategic planning
• Integrating and aligning IT and business goals
• Implementing continuous improvement
• Acquiring and retaining the right resources and skill sets
• Reducing costs and the Total Cost of Ownership
• Demonstrating the business value of IT
• Achieving and demonstrating Value for Money and Return on Investment
• Measuring IT organization effectiveness and efficiency
• Developing business and IT partnerships and relationships
• Improving project delivery success
• Managing constant business and IT change

The key benefits of ITIL®
Adopting and adapting ITIL® according to each organization’s specific requirements enables service providers, regardless of type, size or location, to:
• Support business outcomes;
• Enable business change;
• Optimize customer experience;
• Manage risk in line with business needs;
• Show value for money;
• Continually improve.

Governance
The field of IT Governance emerged as a derivative subset of Corporate Governance in the early 90s, after high profile governance failures in the 1980s had prompted the development of established codes for corporate governance. It was recognized that specific attention should be paid to the role of information and the underpinning technology if good overall corporate governance were to be achieved.
The three goals of IT Governance are to ensure that IT creates business value, to direct and monitor management, and to mitigate IT-related risks. Simply speaking, IT Governance sets out to maximize the value for money achieved by IT spending – whether this is creating shareholder return in the private sector, or improving service levels in the public sector.

Since its inception, IT Governance practice has had a great influence on how IT is perceived: IT has grown from being viewed merely as an enabler of corporate governance to being recognized as a resource and value creator in its own right.

As a framework it ensures the organisation’s IT infrastructure supports and enables the achievement of the corporate strategies and objectives.
The sub-domains of IT governance include:
Business continuity and disaster recovery
Regulatory compliance
Information governance and information security
IT Service Management, including ITIL® and Service Level Management
Knowledge Management, including Intellectual Capital
Project governance
Risk management
IT Governance Auditing

As IT governance plays such a key role in strategic performance, internal auditors are expected to include auditing IT governance in their work plans.

ISO/IEC 38500
The world’s formal international IT governance Standard, ISO/IEC 38500, was published in June 2008. It built upon the trail-blazing work done by Standards Australia, which published AS 8015 in 2005. ISO/IEC 38500 sets out a very straightforward framework for the board’s governance of Information and Communications Technology. Irrespective of its geographic origin, the standard is a key resource for IT governance professionals everywhere in the world.

IT Governance Frameworks

There are three widely recognised, vendor-neutral, third party frameworks that are often described as ‘IT governance frameworks’. While on their own they are not completely adequate to that task, each has significant IT governance strengths.

ITIL® – or IT Infrastructure Library®, was developed by the UK’s Cabinet Office as a library of best practice processes for IT service management. Widely adopted around the world, ITIL is supported by ISO/IEC 20000:2011, against which independent certification can be achieved.

COBIT – Control Objectives for Information and Related Technology – was first drafted in 1996 by ISACA. It is a generic, process-based framework geared towards larger organizations, and is considered the world’s leading IT governance framework, with an emphasis on control over information, IT and related risks, and auditability. It helps organisations meet today’s business challenges in the areas of regulatory compliance, risk management and aligning IT strategy with organisational goals. COBIT is internationally recognised and was updated from version 4.1 to version 5 in 2012. In particular, COBIT’s Management Guidelines component contains a framework for the control and measurability of IT, providing tools to assess and measure the enterprise’s IT capability across the 37 identified COBIT processes.
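A capability assessment of the kind COBIT’s Management Guidelines support can be sketched very simply: each assessed process receives a capability level (COBIT 5 adopts an ISO/IEC 15504-style 0–5 scale), and results are rolled up and compared against targets. The ratings below are hypothetical examples, not real assessment data, and the process names are abbreviated for illustration.

```python
# Hypothetical capability ratings (0-5 scale) for a handful of the
# 37 COBIT 5 processes; the levels shown are invented for illustration.
ratings = {
    "EDM01 Governance Framework": 3,
    "APO12 Manage Risk": 2,
    "BAI06 Manage Changes": 4,
    "DSS02 Manage Service Requests": 3,
}

def average_capability(scores):
    """Mean capability level across the assessed processes."""
    return sum(scores.values()) / len(scores)

def below_target(scores, target=3):
    """Processes whose rating falls short of the target capability level."""
    return [name for name, level in scores.items() if level < target]

print(average_capability(ratings))  # 3.0
print(below_target(ratings))        # ['APO12 Manage Risk']
```

A real COBIT assessment is considerably more involved (each level has defined process attributes that must be rated), but the roll-up-and-compare pattern is the same.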

ISO27002 – (supported by ISO27001), is the global best practice Standard for information security management in organisations.

These three frameworks are all, potentially, part of any best-practice approach to regulatory and corporate governance compliance. The challenge, for many organisations, is to establish a coordinated, integrated framework that draws on all three of these standards.

The Joint Framework, put together by the ITGI (owners of COBIT) and the OGC (owners of ITIL) is a significant step in the right direction.

Governance of the Extended Enterprise, published by the IT Governance Institute, explores how some of the world’s most successful enterprises have integrated information technology with business strategies, culture, and ethics to optimise information value, attain business objectives, and capitalise on technologies in highly competitive environments.

Green IT

An increasingly relevant subject requiring consideration within the sphere of IT Governance is Green IT. In the same way that IT Governance is a critical component of the Corporate Governance of an organisation, Green IT has become an essential aspect of the decision making, framework building and business processes of IT Governance.