IT Ops Challenge
Each layer of technology in the data centre is becoming progressively more complex to control and manage. The average server environment now has thousands of configuration parameters (e.g. the Windows OS has 1,500+, IBM WebSphere Application Server 16,000+, and Oracle WebLogic 60,000+). The growing interdependence and complexity of interaction between applications also make it increasingly difficult to manage and control business services.
IT change is a fact of life: it takes place at every level of the application and infrastructure stack, and it impacts almost every part of the business. To meet these development challenges, businesses have adopted agile development processes to accelerate application release schedules. By employing practices such as continuous integration and continuous build, they are able to generate hundreds of production changes each day. eBay, for example, is estimated to make around 35,000 changes per year.
Industry analyst firm Forrester has stated that, “If you can’t manage today’s complexity, you stand no chance managing tomorrow’s. With each passing day, the problem of complexity gets worse. More complex systems present more elements to manage and more data, so growing complexity exacerbates an already difficult problem. Time is now the enemy because complexity is growing exponentially and inexorably.”
The tools we use to manage IT infrastructure have been around for many years, but they are only capable of measuring what has already happened, and they were not designed to deal with the complexity and dynamics of modern IT technologies. IT operations teams need to automate the collection and analysis of vast quantities of data down to the finest resolution, highlight any changes, and unify the various operations silos. None of the traditional tools are up to this ‘big data’ problem.
Big data for operations is still a relatively new paradigm. Gartner has defined the sector as “IT Operations Analytics”: one that can enable smarter and faster decision-making in a dynamic IT environment, with the objective of delivering better services to your customers. Forrester Research defines IT analytics as “The use of mathematical algorithms and other innovations to extract meaningful information from the sea of raw data collected by management and monitoring technologies.”
Despite the field’s relative youth, a lot has already moved on. Here are a few interesting findings:
- Customer analytics (48%), operational analytics (21%), and fraud & compliance (21%) are now the top three uses for Big Data.
- 15% of enterprises will use IT operations analytics technologies to deliver intelligence for both business execution and IT operations.
- The market is forecast to be mainstream in 2018, making up 10% of the $20+ billion IT Operations Management software category.
- 89% of business leaders believe big data will revolutionize business operations in the same way the Internet did.
- 79% agree that ‘companies that do not embrace Big Data will lose their competitive position and may even face extinction’.
Where to use ITOA?
IT Operations Analytics (ITOA, also known as Advanced Operational Analytics or IT Data Analytics) encapsulates technologies that are primarily used to discover complex patterns in high volumes of ‘noisy’ IT system availability and performance data. Gartner has outlined five core applications for ITOA, to which two further capabilities are commonly added:
- Root Cause Analysis: The models, structures and pattern descriptions of IT infrastructure or application stack being monitored can help users pinpoint fine-grained and previously unknown root causes of overall system behavior pathologies.
- Proactive Control of Service Performance and Availability: Predicts future system states and the impact of those states on performance.
- Problem Assignment: Determines how problems may be resolved or, at least, directs the results of inferences to the most appropriate individuals or communities in the enterprise for problem resolution.
- Service Impact Analysis: When multiple root causes are known, the analytics system’s output is used to determine and rank the relative impact, so that resources can be devoted to correcting the fault in the most timely and cost-effective way possible.
- Complement Best-of-breed Technology: The models, structures and pattern descriptions of IT infrastructure or application stack being monitored are used to correct or extend the outputs of other discovery-oriented tools to improve the fidelity of information used in operational tasks (e.g., service dependency maps, application runtime architecture topologies, network topologies).
- Real-time Application Behavior Learning: Learns and correlates application behavior based on user patterns and the underlying infrastructure, creates metrics from these correlated patterns, and stores them for further analysis.
- Dynamic Baselining of Thresholds: Learns the behavior of the infrastructure under various application and user patterns, determines the optimal behavior of the infrastructure and technology components, benchmarks and baselines the low and high water marks for each specific environment, and dynamically adjusts those baselines as infrastructure and user patterns change, without any manual intervention.
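As a rough illustration of the dynamic baselining idea above, the sketch below keeps a rolling window of a metric and recomputes low and high water marks on every sample, so the ‘normal’ band shifts with usage patterns rather than being tuned by hand. The window size and the three-sigma band are hypothetical choices for illustration, not drawn from any specific ITOA product.

```python
from collections import deque
from statistics import mean, stdev

class DynamicBaseline:
    """Rolling baseline for a single metric, flagging values outside
    dynamically updated low/high water marks.

    Hypothetical sketch: window size and sigma width are arbitrary.
    """

    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.samples = deque(maxlen=window)  # recent metric values
        self.sigmas = sigmas                 # width of the 'normal' band

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it breaches the current baseline."""
        breach = False
        if len(self.samples) >= 2:
            mu, sd = mean(self.samples), stdev(self.samples)
            low, high = mu - self.sigmas * sd, mu + self.sigmas * sd
            breach = not (low <= value <= high)
        # The sample joins the window either way, so the baseline
        # adapts as infrastructure and user patterns change.
        self.samples.append(value)
        return breach
```

Fed a stream of, say, response times, steady values pass silently while a sudden spike is flagged, and the water marks drift automatically if the steady level itself changes.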
By employing advanced analytics to harness vast volumes of highly diverse data from applications and endpoints across an organisation’s IT infrastructure, ITOA solutions give IT service desks instant awareness of issues as they occur – often before end users notice them. Along with awareness, they deliver an understanding of how these issues could in turn affect both the IT infrastructure and the wider business.
IT operations teams are being challenged to run larger, more complex, hybrid and geographically dispersed IT systems that are constantly in a state of change without growing the number of people or resources. Everything from system successes to system failures and all points in between are logged and saved as IT operations data. IT services, applications, and technology infrastructure generate data every second of every day. All that raw, unstructured, or polystructured data is critical in managing IT operations successfully. The problem is that doing more with less requires a level of efficiency that can only come from complete visibility and intelligent control based on the detailed information coming out of IT systems.
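To make the raw-data point concrete, here is a minimal sketch of one early ITOA step: parsing unstructured log lines into structured records and counting error hotspots per service. The log format, field names, and service names are invented for illustration, not any particular product’s schema.

```python
import re
from collections import Counter

# Hypothetical log format: "2024-01-15T10:32:01 ERROR payment-svc timeout calling db"
LOG_RE = re.compile(
    r"(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<msg>.*)"
)

def parse(line):
    """Turn a raw log line into a structured record, or None if unparseable."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

def error_hotspots(lines):
    """Count ERROR-level records per service -- a first step toward
    seeing which component is generating operational noise."""
    counts = Counter()
    for line in lines:
        rec = parse(line)
        if rec and rec["level"] == "ERROR":
            counts[rec["service"]] += 1
    return counts
```

Real ITOA platforms do this at far greater scale and with far richer schemas, but the principle is the same: structure first, then analyse.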
ITOA provides a set of powerful tools that can generate the insight IT operations teams need to proactively determine the risks, impacts, or potential outages that may arise from events in the environment. This gives operations a new way to proactively manage IT system performance, availability, and security in complex and dynamic environments, with fewer resources and greater speed. ITOA contributes to both the top and bottom line of any organisation by cutting IT operations costs and increasing business value through a better user experience and more reliable business transactions.
ITOA technologies are still relatively immature, and Gartner has stated that it will take another two to five years for them to reach maturity. However, smart MSPs are moving fast to incorporate these technologies into their portfolios, and IT consumers are starting to demand them from their partners. In the next few years it is forecast that the vast majority of Global 2000 companies will have deployed IT Operations Analytics platforms as a central component of their architecture for monitoring critical applications and IT services. The key message: if you have not already started to look at ITOA, it is time to start planning…