This is no big surprise, as past reports have all pointed towards this growth and expansion:
VentureBeat * notes that “Although the big data market will be nearly $50B by 2019 according to analysts, what’s most exciting is that the disruptive power of machine data analytics is only in its infancy. Machine analytics will be the fastest growing area of big data, which will have CAGR greater than 1000%.”
The move towards cloud-based solutions opens up opportunities, and it is not going to reverse. Continuing the trend of recent years, more and more companies are increasing their use of cloud-based solutions, and with this comes the opportunity to collect data and to glean information and knowledge from it.
Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata Analytics, * highlights “The fast shift to the cloud: The cloud has become a preferred information storage place. Its rapid adoption is likely to continue even in 2016. According to Technology Business Research, big data will lead to tremendous cloud growth; Revenues for top 50 public cloud providers shot up from 47% in the last quarter of 2013 to $6.2 billion.”
It is not difficult to predict that in 2016 the cloud, and the opportunities it opens up for data, analytics and machine learning, will become a huge driver for business.
Applications will be designed to discover self-improvement strategies as a new breed of log and machine data analytics, at the cloud layer, using predictive algorithms, enables continuous improvement, continuous integration and continuous deployment. The application will learn from its users; in this sense the users will become the system architects, teaching the system what they want and how the system is to deliver it to them.
Gartner views Advanced Machine Learning among the top trends to emerge in 2016 *, with “advanced machine learning where deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world, on their own … (being particularly applicable to large, complex datasets) this is what makes smart machines appear "intelligent." DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad sweeping abstract classes of content. This area is evolving quickly, and organisations must assess how they can apply these technologies to gain competitive advantage.” The capability of systems to use advanced machine learning need not be confined to the information they find outside: it will also be introspective, applied to the system itself and to how it interfaces with human users.
A system performing data analytics needs to learn what questions it is being asked and how those questions are framed, as well as the vocabulary and the syntax the user chooses to ask them. No longer will the user be required to struggle with the structure of queries and programming languages aimed at eliciting insight from data. The system will understand the user’s natural-language requests, such as “get me all the results that are relevant to my understanding of ‘x, y and z’ ”. The system will be able to do this because of its experience of users asking these questions many times in structured programming languages (a corpus of language that the machine has long understood) and matching them to a new vocabulary that is more native to the non-specialised user.
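The matching described above can be sketched in miniature. This is a hypothetical illustration, not a real product: the corpus, field names and scoring are all invented, and a production system would use far richer language models, but it shows the core idea of mapping a user's loose phrasing onto structured queries the system has already learned.

```python
# Minimal sketch (all names hypothetical): map a natural-language request
# onto the best-matching structured query from a learned corpus, scored
# by simple token overlap.
from collections import Counter

# Past (phrasing -> structured query) pairs the system is assumed to have
# accumulated from expert users over time.
QUERY_CORPUS = {
    "show total sales by region": "SELECT region, SUM(sales) FROM orders GROUP BY region",
    "list customers with overdue invoices": "SELECT name FROM customers WHERE overdue = 1",
    "get results relevant to revenue trends": "SELECT month, revenue FROM kpis ORDER BY month",
}

def tokenise(text):
    """Lowercase bag-of-words representation of a phrase."""
    return Counter(text.lower().split())

def translate(request):
    """Return the structured query whose known phrasing best matches `request`."""
    req = tokenise(request)
    def overlap(phrase):
        # Counter intersection keeps the minimum count of shared tokens.
        return sum((tokenise(phrase) & req).values())
    best = max(QUERY_CORPUS, key=overlap)
    return QUERY_CORPUS[best]

print(translate("please show me total sales for each region"))
# -> SELECT region, SUM(sales) FROM orders GROUP BY region
```

A real system would of course learn the corpus itself rather than hard-code it, which is precisely the self-learning behaviour the trend points to.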
2016 will be the year these self-learning applications emerge, due to changes in the technology landscape. As Himanshu Sareen, CEO at Icreon Tech, * points out, this move to machine learning is being fuelled by the technology that is becoming available: “Just as all of the major cloud companies (Amazon Web Services, Google, IBM, etc.) provide analytics as a service, so do these companies provide machine learning APIs in the cloud. These APIs allow everyday developers to ‘build smart, data-driven applications’ ”. It would be foolish of these developers not to consider making their systems self-learning.
Our prediction is that through 2016 many more applications will become self-learning thanks to developments in deep learning technology.
While the highly specialised roles of the programmer, the data scientist and the data analyst are not going to disappear, the exclusivity of the insights they have been party to is set to dissipate. Knowledge gleaned from data will not remain in the hands of the specialist, and technology will once again democratise information. The need for easy-to-use applications providing self-serve reports and self-serve analysis is already recognised by business. According to Hortonworks Chief Technology Officer Scott Gnau, * “There is a market need to simplify big data technologies, and opportunities for this exist at all levels: technical, consumption, etc.” … “Next year there will be significant progress towards simplification.”
Data will become democratised, first from programmers, then from data scientists and finally from analysts. As Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata, remarks: “Even those not specially trained in the field will begin to crave a more mindful engagement with analytics. This explains why companies are increasingly adopting platforms that allow end users to apply statistics, seek solutions and be on top of numbers.” … “Humans can’t possibly know all the right questions and, by our very nature, those questions are loaded with bias, influenced by our presumptions, selections and what we intuitively expect to see. In 2016, we’ll see a strong shift from presumptive analytics — where we rely on human analysts to ask the right, bias-free questions — toward automated machine learning and smart pattern discovery techniques that objectively ask every question, eliminating bias and overcoming limitations.”
“Historically, self-service data discovery and big data analyses were two separate capabilities of business intelligence. Companies, however, will soon see an increased shift in the blending of these two worlds. There will be an expansion of big data analytics with tools to make it possible for managers and executives to perform comprehensive self-service exploration with big data when they need it, without major handholding from information technology (IT), predicts a December study by business intelligence (BI) and analytics firm Targit Inc.” *…“Self-service BI allows IT to empower business users to create and discover insights with data, without sacrificing the greater big data analytics structures that help shape a data-driven organisation,” Ulrik Pedersen, chief technology officer of Targit, said in the report.
We confidently predict that in 2016 more and more applications for analysing data will require less technical expertise.
The maturity of big data processing engines enables an agile exploration of data, and agile analytics able to make huge volumes of disparate and complex data fathomable. Connecting and combining datasets unlocks the insights held across data silos, and this will be done automatically in the background by SaaS applications rather than by manually manipulating spreadsheets.
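The kind of background blending described here amounts, at its simplest, to joining records from separate silos on a shared key. The sketch below is purely illustrative: the "CRM" and "billing" records and their field names are invented, and a real SaaS application would do this at scale against live stores, but the join itself is the same operation.

```python
# Hypothetical sketch: blend two data silos on a shared key, the kind of
# join a SaaS application could run automatically in the background
# instead of a user manipulating spreadsheets by hand.
crm = [
    {"customer_id": 1, "name": "Acme Ltd"},
    {"customer_id": 2, "name": "Globex"},
]
billing = [
    {"customer_id": 1, "total_spend": 12500},
    {"customer_id": 2, "total_spend": 8300},
]

def blend(left, right, key):
    """Inner-join two lists of records on `key`, merging matching rows."""
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

combined = blend(crm, billing, "customer_id")
print(combined[0])
# -> {'customer_id': 1, 'name': 'Acme Ltd', 'total_spend': 12500}
```

The insight — which customers cost the most to serve, say — only exists once the two silos are combined, which is exactly why automated blending matters.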
David Cearley, vice president and Gartner Fellow, postulates “The Device Mesh”, which “refers to an expanding set of endpoints people use to access applications and information or interact with people, social communities, governments and businesses”. He notes that "In the postmobile world the focus shifts to the mobile user who is surrounded by a mesh of devices extending well beyond traditional mobile devices," devices that are “increasingly connected to back-end systems through various networks”, and that “As the device mesh evolves, we expect connection models to expand and greater cooperative interaction between devices to emerge”.
In the same report Cearley says that “Information has always existed everywhere but has often been isolated, incomplete, unavailable or unintelligible. Advances in semantic tools such as graph databases as well as other emerging data classification and information analysis techniques will bring meaning to the often chaotic deluge of information.”
It is an easy prediction, but more and more datasets will be blended from different sources, allowing more insights; this will be a noticeable trend emerging during 2016.
Having the ability to collect and explore complex data leads to an inevitable need for a toolset to understand it. Tools that can present the information in these complex datasets as visual representations have been getting more mature and more widely adopted. Suhale Kapoor, Co-Founder and Executive Vice President, Absolutdata Analytics, * observes: “Visuals will come to rule: The power of pictures over words is not a new phenomenon – the human brain has been hardwired to favour charts and graphs over reading a pile of staid spreadsheets. This fact has hit data engineers who are readily welcoming visualisation softwares that enable them to see analytical conclusions in a pictorial format.”
The fact that visualisations do unlock knowledge from data will lead to more adaptive and dynamic visualisation tools: “Graphs and charts are very compelling, but also static, sometimes giving business users a false sense of security about the significance — or lack of it — in the data they see represented. … data visualisation tools will need to become more than pretty graphs — they’ll need to give us the right answers, dynamically, as trends change … leading to dynamic dashboards … automatically populating with entirely new charts and graphs depicting up-to-the-minute changes as they emerge, revealing hidden insights that would otherwise be ignored”*
We predict that in 2016 a new data-centric semiotic, a visual language for communicating data-derived information, will become stronger, grow in importance and be the engine of informatics.