
Trends Towards 2022

  • Kurt Cagle 


It’s the last week of the year. The gifts have been opened (well, the ones that aren’t currently still sitting in a dock in Los Angeles after being ordered in November), the cats have been teaching tree ornaments the meaning of the word gravity, and the cookies which tasted so good on Christmas Eve are getting more than a bit stale. In short, it’s time to fire up the old word processor, stare deeply into the dark pixels on the screen (and wonder if maybe it’s time to get a new computer if you hadn’t just blown your entire budget on game subscriptions for the kids), and write up The Predictions For The Coming Year!!!! (yay.)

When I started writing these things (when I could still actually see my feet, about twenty years ago), I wrote with great earnestness, trying to figure out deep trends that I could pass on to you, my gentle readers. Now, it’s just a reality check for me, a chance to figure out what I should be paying attention to in the coming year. If it’s useful to you, then I’m glad. If not, well, seriously, how could it not be useful? Sheesh, some people.

The usual rules apply. Do not make investment decisions based on what I say here (oh, please, PLEASE don’t even think about it). I am not a lawyer, nor a doctor, nor a butcher, baker, or candlestick maker, though I have played one or two of these on TV. Do not try this at home. Or at work, which is likely at home, nowadays.


The Omega Strain Is Coming!

If Covid-19 has done nothing else, it has taught an entire generation the Greek Alphabet. We are more than halfway through the alphabet with Omicron, which is now fast displacing Delta as the virus du jour. Assuming that we follow the same pattern as last year, I anticipate one more major variant around the end of the alphabet, which puts us at Omega, which cues up all those Dan Brown conspiracy theories involving skeletal figures with scythes and black robes and the Illuminati and anti-protons sealed away in a test tube and the Knights Templar hiding away in Scotland and … yeah, the plot gets a bit threadbare at times, but that’s what we can expect from Covid. Or maybe the Knights that Say “Ni”.

Once we get past Omega, we’ll be in Loki Variant territory. Just saying.

Seriously, I think we may be at a stage where it’s endemic, and most of the effort ends up going into mitigation of symptoms. Indeed, one interesting thing coming out of South Africa, where the Omicron variant first appeared, is both how quickly it is spreading and how quickly cases are dropping after the surge. This is a very good sign, all things considered, because it indicates that the pandemic may actually be nearing its end state. If the same pattern plays out in the United States, we could be mostly past the pandemic by mid-spring, with Covid becoming an omnipresent, endemic disease like influenza – potentially deadly, but simply unpleasant for most people.


Supply Chains Unsnarling, Inflation Quieting, Economy Cooling

I’ve compared the current supply chain problems to a multi-car traffic pile-up, such as the one that occurred in Wisconsin recently. The conditions – icy roads and poor visibility – set the stage for a 100-car pile-up that took more than a day to clear. It took so long because of blocking conditions: cars blocked travel for aid vehicles, the limited number of first responders meant that police, firefighters, and tow trucks were stretched to the limit, and capacity to store damaged vehicles was limited. This is typical of system shock waves.

The important thing to realize about these kinds of systemic events is that while they can bring any system to its figurative knees, once the wave passes, things do eventually recover. In the case of supply chains, several issues caused them to seize up: a large (and highly uncertain) drop in demand due to the pandemic, coupled with a critical need for certain commodities such as toilet paper, left producers scrambling and ports snarled. This was then compounded by workers getting hit with Covid, and by many ports overreacting by cutting staff and then scrambling as demand came surging back. On top of all of this, Covid proved to be the signal that many people who were close to or beyond retirement age, but still working, had been waiting for, coinciding with a generational high in retirements.

This resulting surge has meant many goods not getting to market in time, hence pushing up prices, coupled with spot gluts as supply briefly outpaced demand. President Biden’s decision to open up the ports 24 hours a day began to whittle away at the congestion at the ports, and there are indications that supply and demand are beginning to move towards a more stable equilibrium.

One consequence of this has been that inflation jumped to a two-decade high. According to the monetarists (including Milton Friedman), inflation is purely a monetary (and hence policy) phenomenon due to too much government stimulus entering the economy, a stance which has actually been repeatedly disproven once supply chain issues and demographics are taken into account. Inflation was a significant problem when Friedman was formulating his theories, but ironically the primary factor for that was the fact that the economy was growing faster than at any point in the previous century, in great part because of the dramatic rise of the Baby Boomers.

In most cases, inflation seems to be uncorrelated with fiscal policy but highly sensitive to disruptions in the supply chain. When goods can’t get to market, shortages arise, and prices go up. When the goods do finally get to market after that market has adapted to lower supply, gluts emerge, and prices drop (usually via discounts). The same thing seems to be happening now. Electronics prices are high at the moment, both because there is still relatively high demand (for entertainment purposes, if nothing else) and because of a chip shortage caused by a chip fabrication plant in Japan burning down, automotive chip orders dropping dramatically in the first wave of the Pandemic, and the general snarl in supply chains.

New fabs are now coming back online, supply is beginning to overshoot demand, and by next Christmas, it is likely that electronics will actually be heavily discounted compared to current prices. This is happening across multiple sectors (including the energy sector, where oil prices are beginning to drop after hitting ten-year highs earlier in 2021), and as a consequence, inflation will likely start receding as a concern by Autumn 2022.

Ironically, this will mean that, by certain measures, the economy will start to cool down, though it’s important to remember that such a slowdown is relative to an annualized rate of nearly 5%, which is the fastest the economy has risen in decades. Put another way, the economy is basically recovering to a more sustainable rate than it has throughout 2021, which is good news given that, in certain sectors, unemployment at the moment is negative – there are more jobs open than there are people to fill them.


A New Labor Normal

In the last two years, a whole new lexicon has entered into the realm of work: Work From Home (WFH), Hybrid, the Great Resignation, Zoom Meetings, Zoom Meeting Fatigue, and so on, all indicating that work as we have known it has entered into the twilight zone.

I’ve made this argument before, so will keep it short here: We’re entering into a two-decade-long period in which labor will have more strength than it’s had in decades, and indeed, where the balance of power is going to shift from the money people (sales and investment) to the creative people (technology and creative). This has to do both with the changing nature of work – requiring more technological sophistication and more creativity and less need for large-scale monetary management – along with a demographic shift marked by a largely flat (or even shrinking) rather than growing workforce.

One of the central changes in work is that after two years, the likelihood that businesses will be able to demand that people return to the office is close to nil. Part of this comes from the willingness of people to jump to other companies that do offer WFH or similar flexible opportunities, a willingness that would have been unheard of three years ago.

Additionally, over the year, the number of managers who have embraced WFH and adapted their management style accordingly has grown larger than the number who have pushed for a return to the office. Many of the managers now retiring are ones who see the handwriting on the wall with regard to which type of management style will predominate, and are getting out with the best potential retirement packages.

This is going to accelerate other trends. During the last major era of outsourcing, corporations would typically outsource whole teams in a specific location, usually through a single vendor. What I suspect will happen now is that you’re going to see the shift towards multinational teams, where you may have people from the US, England (or increasingly Scotland), Amsterdam, Germany, Denmark, Greece, Romania, Nigeria, India, China, Japan, and Australia on the same team, moving increasingly to asynchronous work and hand-off meetings. Tax law is about to get a whole lot more complicated, and I wouldn’t be at all surprised if you see the rise of Cayman coordination companies in order to minimize exposure in that area.

You’re also going to see the true rise of the independent consultant, to a degree that hasn’t really been feasible before. In the US, for instance, independent consultants have typically been at the bottom of the stack when it came to tax advantages, which tended to favor “consultancies” that were literally no more than body shops, providing subpar wages and anemic benefits but that often had exclusive contracts with companies with regards to how prospective packages were advertised. That model, I suspect, will prove increasingly untenable, especially at the mid-to-upper level of competencies required.

You’re also going to see a point soon (if it hasn’t already happened) where technical staff may end up making more in absolute terms than sales-oriented staff at the same experience level. This situation will likely equalize by the end of the decade, but there are several key trends that support it: in general, the expected cost of projects (especially integration-oriented ones) will go down as augmented intelligence becomes pervasive, while at the same time the barrier to entry also goes down. Meanwhile, more and more companies are shedding their IT departments entirely in favor of cloud deployments, which pushes more of those whose skills would have been in an IT department into the cloud as well.

Furthermore, I anticipate that overall IT will likely face a few years in the doldrums (possibly to 2025 or later) as so many technologies that have been overhyped (including AI in general, along with machine learning and data analytics) begin to shift into infrastructure mode. It does not surprise me that suddenly Metaverses and Multiverses are everywhere, as it is my belief (see below) that these will collectively be integration and standards projects rather than truly transformative tech. These usually tend to be big in Gartner trough periods, when a given technology falls off the hype cliff, and there are whole clusters of these now slipping over the edge of the waterfall.

One other characteristic I expect of the independent consultant is that such workers are more likely to maintain multiple projects with multiple clients simultaneously. Work From Home, I believe, is going to make that more feasible, and it is one reason that I suspect a lot of managers would just as soon not support WFH – it means they have to compete for the attention of workers who, in an office environment, could be more readily monitored and kept from splitting their time.

I suspect that the environment will likely be favorable for labor for some time, at the expense of profitability. A reader asked me whether those increased labor costs will reach a point where many businesses are no longer profitable. I don’t doubt that will eventually happen, but again, this is balanced by the fact that the financial barriers to entry for starting a business will also be at historic lows. You simply need less money to start a business, but you will likely see less profit. Investors will have fewer opportunities to invest in blockbuster projects, and those will see lower returns as well. At some point that provides an equilibrium constraint, one that will ultimately require fewer people as well. This is going to be a much bigger problem by the end of the decade, I suspect.


Electronic Currency? Yes. Self-Sovereignty? Probably Not.

Digital asset identity is going to be one of the big topics in the next year, because it underlies so many other technologies. Right now, the emphasis is on the notion of self-sovereignty, in which the assurance of identity can be established via some form of algorithm, typically one built on a blockchain. The idea, big in Libertarian circles especially, is that if you can create a blockchain system, then governments, big nasty things that they are, are no longer needed to assure the validity of currency – in effect, every person becomes their own sovereign.

The biggest exponents of such self-sovereignty are corporations that would love to become issuers of currency. Significantly, each such corporation would really prefer that you use their blockchain because it makes it easier to tie you into their specific economic ecosystem.

Not surprisingly, governments are really not all that keen to legitimize such fiat currency, because ultimately their power, including their financial power, stems from their sovereignty – indeed, the power to control its own currency is one of the most reliable indicators for how influential a given country is.

Countries have been moving to electronic currency for decades, and the banking systems of the world at this stage could not survive if the bulk of all currency was in fungible form (paper, bullion coinage, secured assets). If they moved to an identity system, then it would be an identity system that is advantageous to those banks, most of which exist at the sufferance of their sovereign home states.

In the next few years, I expect that most sovereign states will engineer their systems so that they are explicitly not self-sovereign, but will otherwise borrow many of the higher-order frameworks, such as NFTs (non-fungible tokens), that currently are practically meaningless – primarily because I do not believe that zero-trust networks are socially feasible. At the end of the day, you need someone to sue, someone to take responsibility, and that lack of trust all too often translates into a lack of accountability.

I think the secret sauce may very well be in decentralized identifiers, which have many of the same characteristics as zero-trust networks but don’t carry the self-sovereignty constraint. The idea here is that so long as you have credentials that are verifiable, the computational requirements that make most current ICOs so potentially bad as a foundation go away – you COULD have a zero-trust network, but it would be competing against trusted networks. An NFT in a trusted network, I suspect, will be far more meaningful, because there is an implicit surety (one that could also be made explicit) that ownership can be enforced in such domains.
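
To make that a bit more concrete, here is a minimal sketch (in Python, using the cryptography package) of the verifiable-credential idea once you drop the self-sovereign blockchain requirement: an issuer signs a claim, and anyone holding the issuer’s published key can check it, with no mining or global ledger involved. The DID strings and the claim itself are invented for illustration.

```python
# A minimal sketch of credential issuance and verification under a
# decentralized-identifier model. The DID strings and claim fields here
# are illustrative placeholders, not a real DID method.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer controls a signing key; the public key would be published
# in the issuer's DID document rather than held by a central registry.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

credential = {
    "issuer": "did:example:university",   # hypothetical issuer DID
    "subject": "did:example:alice",       # hypothetical subject DID
    "claim": {"degree": "MSc Data Modeling"},
}

# Sign the canonical JSON form of the credential.
payload = json.dumps(credential, sort_keys=True).encode("utf-8")
signature = issuer_key.sign(payload)

# Any party holding the issuer's public key can verify the credential
# without consulting a blockchain or a central authority.
try:
    issuer_public.verify(signature, payload)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")
```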


Growing Pains for the Metaverse

About fifteen years ago, I wrote a short novel plus commentary for a software company, primarily as an analysis of trends within a fictional setting that took place about, well, now. That particular work has long since disappeared into the depths of time, and some things I got rather disastrously wrong, though I was actually more accurate than I had hoped. I anticipated frictionless transactions, the pervasive rise of the cell phone, Siri and similar agents, cloud computing, drones, hyperconnected environments, IoT, augmented reality, the problems inherent with fakes, the rise of avatars, and zoom meetings, among other things. I had my doubts about self-driving vehicles (I thought they might be around, but not widely adopted), and I’d touched on both work from home and the use of neural net programming.

I missed the pandemic (I figured one was possible, but I didn’t foresee the influence one would have), and I assumed some kind of hyperloop that seemed to be a going concern for a bit but lately has rather disappeared from the news. I also assumed that we’d be a bit farther along with 3D construction printing than we are, but we’re getting there – metals were the stumbling block, though those were solved in 2019.

I’ve started writing a new novel, tentatively entitled Quel, the French word for What, which I plan on publishing through this newsletter and will then publish as a completed manuscript once I’m done, probably on Amazon. I’m doing it for two reasons – the first to help provide use cases for what I suspect the metaverse will look like fifteen years from now, and the second to engage in dialog with others about where they see the same thing happening.

One thing that I’ve already realized – any discussion of the Metaverse today is likely to be a far cry from Hiro Protagonist, Neal Stephenson’s famous hero in the 1992 novel Snow Crash. This is not because I think that you couldn’t put together Stephenson’s world today, but because you couldn’t put together that world in a way that wouldn’t fragment irrevocably. Nor would you necessarily want to. Snow Crash is dystopian in many, many ways, and if you’re aiming for a future world, you really want to make sure that what you’re targeting doesn’t suck even before you begin.

There are many pieces involved – digital identity and sovereignty have already been discussed, contractual frameworks, descriptions of the four big As (Actions, Assets, Avatars, and Augmentation), security, coordinate systems, IoT (sensors and actuators), the role of graphs and machine learning, federation, frames of reference, cloud computing, data interoperability, the importance of time and narrative, and so forth and so on. Right now there are a lot of standards, but not a lot of consensus about how these standards tie together into a cohesive whole. This is even before there’s a katana anywhere to be seen.

My sense is that the metaverse is not going to be a thing but a process, and will ultimately subsume almost all of computing in one fashion or another by the time it becomes ready for prime time later this decade. Over the next year, I anticipate that you will see people jockeying for position and different organizations taking different stabs at the same problem, followed, likely towards the end of the year, by a meeting of minds about how to move forward with what exists.

In the meantime, I’m expecting that the Metaverse term (and similar differentiators) will go in and out of vogue a great deal over the next couple of years, as different groups square off to claim their own particular slice of the virtual world. More than likely, the first real consensus will come from game companies that are looking to share assets across multiple gaming universes.


Make or Break For RDF, Turtle, SPARQL and SHACL

My recent writings on RDF, Turtle, and SPARQL are fairly abstract, even in the rarified world of cognitive computing. In some respects, RDF is old, a standard that was formed not long after XML became a standard at the start of the Millennium, and most people who learned about RDF at that time came away with an impression that it was a bizarre language that had no relevance to what was important at the time, and that it would never really go anywhere.

However, having seen JavaScript emerge from a language barely capable of functionality into one of the most heavily used languages on the planet, I think it’s important to understand that the semantic web today is likely very different from the way it was 21 years ago, or even ten years ago. I also think that there are some profound changes to what you can do with the stack today that would have been inconceivable even five years ago.

I also believe that the SPARQL/SHACL/GraphQL stack is likely to represent the unification of all four formats – JSON, XML, RDF, and CSV – that large (enterprise and inter-enterprise scope) data systems desperately need, and it may also contain the seeds to unify semantic and property graphs, IF it doesn’t get sideswiped by yet another OOOOH SHINY technology that solves someone’s immediate fetish for inventing a new coding language at the expense of long-term interoperability.

I like SHACL … a lot. I’ve been in the data modeling space since my formative period as a programmer in the late 1980s, and I think that SHACL fills a hole that has existed since the introduction of SPARQL itself: a metalanguage for describing and abstracting out patterns in graphs – something that isn’t a formal logic system like OWL (which is powerful but overkill for most applications) but that is better suited to creating rules, validating patterns, and establishing constraints. The advanced form of the language also codifies how to use SHACL for creating named SPARQL functions, something that I see as fairly critical in taking the language to the next level.
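
As a minimal illustration of what I mean by rules and constraints, here’s a sketch using rdflib and pySHACL: the shape says a person’s age must be an integer no greater than 130, and the validator flags data that breaks the rule. The vocabulary (ex:Person, ex:age) is made up purely to show the flavor of the thing.

```python
# A minimal sketch of SHACL as a rule/validation layer over RDF,
# using rdflib and pySHACL. The ex: vocabulary is invented.
from rdflib import Graph
from pyshacl import validate

data_ttl = """
@prefix ex: <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:jane a ex:Person ; ex:age "200"^^xsd:integer .
"""

shapes_ttl = """
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path ex:age ;
        sh:datatype xsd:integer ;
        sh:maxInclusive 130 ;   # a rule, not a formal-logic axiom
    ] .
"""

data = Graph().parse(data=data_ttl, format="turtle")
shapes = Graph().parse(data=shapes_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)       # False: the age constraint is violated
print(report_text)
```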

SHACL + GraphQL makes it possible for those working with JSON to deal with RDF stores as JSON stores. This is important for two reasons: the potential space of JSON users is much larger than the space of those who work with RDF, and because SHACL makes it possible to dynamically specify the shape of a graph, the constructed objects can be shaped in ways that static JSON stores cannot. Finally, and perhaps most importantly, such shapes can be used to shape mutational JSON structures that can then map back into RDF on the back end, transparently.
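
The underlying JSON-to-RDF round trip is already pretty painless. Here’s a minimal sketch (assuming rdflib 6 or later, which bundles JSON-LD support) of the same fact moving from JSON into a graph and back out again – the GraphQL/SHACL layer I’m describing would sit on top of exactly this kind of mapping.

```python
# A minimal sketch of the JSON <-> RDF round trip that a GraphQL/SHACL
# layer would sit on top of: a JSON-LD document parsed into a graph and
# serialized back. Assumes rdflib >= 6.0 (bundled JSON-LD support).
from rdflib import Graph

doc = """
{
  "@context": {"name": "http://xmlns.com/foaf/0.1/name"},
  "@id": "http://example.org/jane",
  "name": "Jane"
}
"""

g = Graph().parse(data=doc, format="json-ld")
for s, p, o in g:
    print(s, p, o)                        # the same fact, now addressable as triples

print(g.serialize(format="json-ld"))      # and back out again as JSON
```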

However, again, for this to happen, the window of opportunity to push these into the next-generation standard is relatively small – this year, perhaps next – before the pressure to find alternative approaches becomes too great. There are other potential standards – openCypher and GQL, for instance, both have benefits to recommend them – though I am inclined to believe that both open up certain aspects of graph computing (graph data analytics and SQL-like queries) that do not have the same expressivity for reasoning that RDF does, providing short-term “wins” at the expense of longer-term capabilities.


Convergence or Chaos

On a somewhat related note, there is a growing realization outside of the deep learning sector that deep learning’s explainability issues (and its computational expense) may very well prove to be the Achilles’ heel of the technology. That the technology is powerful is undeniable. For certain classes of problems it shines: natural language processing and generation (NLP and NLG, or the umbrella term NLU, for natural language understanding) powers interactive agents such as Siri, Alexa, and a whole load of similar bots; the ability of neural networks to drive visual recognition in areas such as autonomous drone control and driving systems continues to be the most reliable mechanism for moving towards self-driving systems; and in general, machine learning is becoming the preferred choice for classification when given a labeled system (what semanticists would describe as lexical ontologies).

The problem comes when data is sparse, biased, or expensive to compose, all of which are common in real-world problems. Sometimes you need a graph of information and the ability to infer from its relationships – relationships that can be derived from subject matter experts, who can help build structures around the classifiers and reduce the overall cost of creating training data by indexing specific relationships a priori.

At the same time, machine learning solves one of the big problems inherent in semantics – the classification problem – making it easier to identify and transform instance data into a coherent framework, even when translating from one ontology to the next. If I (or a neural network) can identify a particular entity as a duck candidate, then it becomes easier to identify the characteristics that support or refute that characterization. Moreover, once the classification is made and verified, you can reason about that entity as if it were a duck. This particular conundrum may seem silly, but restated only slightly, it is at the heart of master data management and identity management.
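
Here’s a toy sketch of that hand-off, using rdflib: a classifier’s confidence score, once it clears a verification threshold, becomes an asserted type in the graph, and from that point on ordinary queries can treat the entity as a duck without caring how the classification was made. The score and vocabulary are invented for illustration.

```python
# A minimal sketch of letting a classifier's output feed a semantic graph:
# a confident "duck candidate" becomes an asserted type, after which
# graph queries can reason about the entity as a duck.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

candidate = EX.entity42
duck_score = 0.93          # e.g. softmax output from a neural classifier

if duck_score > 0.9:       # verification threshold before assertion
    g.add((candidate, RDF.type, EX.Duck))

# Downstream logic no longer cares how the classification was made.
results = g.query("SELECT ?d WHERE { ?d a <http://example.org/Duck> }")
for (d,) in results:
    print(d)
```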

Similarly, data analytics is moving away from whole-population analysis and towards Markov chains (and blankets) and Bayesian analysis, both of which can be thought of as statistical analysis on graphs. Bayesian methods have the advantage of working well with smaller datasets and are often of greater use when it comes to process mechanics. Given specific known conditions, you can use graph-based Bayesian models to determine the likelihood of individual point failures, as well as cascading failures that are often very difficult to determine when working with whole-population stochastics.
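
To show what I mean by point failures on a graph, here’s a minimal, hand-rolled sketch in plain Python: two components feed a downstream node, and Bayes’ rule gives both the chance of a cascading failure and the diagnosis once a failure is observed. The probabilities are made up for illustration.

```python
# A minimal, hand-rolled sketch of Bayesian reasoning on a small
# dependency graph: pump and valve failures feed a "line down" node.
# The probabilities are invented for illustration.
from itertools import product

p_pump_fail = 0.05                        # P(pump fails)
p_valve_fail = 0.10                       # P(valve fails)

# P(line down | pump state, valve state)
p_line_down = {
    (True, True): 0.99,
    (True, False): 0.80,
    (False, True): 0.60,
    (False, False): 0.01,
}

def prior(pump, valve):
    """Joint prior over the two independent parent nodes."""
    pp = p_pump_fail if pump else 1 - p_pump_fail
    pv = p_valve_fail if valve else 1 - p_valve_fail
    return pp * pv

states = list(product([True, False], repeat=2))

# Marginal probability that the line goes down (cascading failure).
p_down = sum(prior(p, v) * p_line_down[(p, v)] for p, v in states)

# Diagnosis: P(pump failed | line is down), via Bayes' rule.
p_pump_given_down = sum(
    prior(p, v) * p_line_down[(p, v)] for p, v in states if p
) / p_down

print(f"P(line down)             = {p_down:.3f}")
print(f"P(pump fail | line down) = {p_pump_given_down:.3f}")
```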

Graph embeddings are another way of mixing data science concepts with graph concepts, in which specific patterns are bound into models as embeddings. While one approach to encodings is fairly simplistic (attempting to encode a graph as a numeric signature), I suspect that a more fruitful area of inquiry will be the development of SHACL patterns that can then be associated with identifiers, which can in turn be used with clustering and neural network systems to create reverse queries. I expect that the next year will see this idea reach greater fruition, as researchers begin to merge semantic ideas with property graphs, neural networks, and so forth.

I also believe that while there will continue to be a core of developers who try to use neural networks to resolve higher-order logical problems, this is going to prove fruitless. Neural networks are great ways of solving perceptual problems, including visual classification, but logical systems are higher levels of abstraction that arise primarily as emergent phenomena – they may influence how neural nets work, but can’t necessarily be captured within the context of one, regardless of how much data you use. I don’t believe that graphs by themselves are the solution either, but I do think graphs + Bayesians + emergent fractal phenomena + neural networks is likely the right direction.
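
To make the pattern-signature idea above a little more concrete, here’s a minimal sketch: each node gets a binary vector recording which named patterns it matches (think SHACL shapes or relation types), and those vectors are then clustered. The shapes, nodes, and memberships are all invented; a real system would use far richer encodings, but the flow – patterns to identifiers to vectors to clusters – is the point.

```python
# A minimal sketch of the "pattern signature" flavor of graph embedding:
# each node is encoded by which named patterns it participates in, and
# the resulting vectors are clustered. Everything here is invented.
import numpy as np
from sklearn.cluster import KMeans

patterns = ["HasSupplier", "HasCustomer", "ShipsOverseas", "HoldsInventory"]

# node -> set of pattern names the node matches (e.g. from SHACL validation)
node_patterns = {
    "acme": {"HasSupplier", "HasCustomer", "HoldsInventory"},
    "globex": {"HasSupplier", "HasCustomer"},
    "initech": {"ShipsOverseas"},
    "umbrella": {"ShipsOverseas", "HoldsInventory"},
}

# Binary signature vector per node: 1 if the node matches the pattern.
nodes = list(node_patterns)
X = np.array([[1 if p in node_patterns[n] else 0 for p in patterns] for n in nodes])

# Cluster the signatures; cluster ids can then feed a neural model, or be
# used in reverse: "which nodes look like this pattern profile?"
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for node, label in zip(nodes, labels):
    print(node, "-> cluster", label)
```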


DataOps and the Post-Agile Organization

I first heard the term DevOps about five years ago, when I met an acquaintance of mine at a coffeeshop and he described how he’d been very heavily involved with a large software company’s Developer Operations efforts. One of the key distinctions of DevOps was that it took Agile thinking to the next logical step – creating a continuous integration environment in which organization, project management, coding, documentation, and testing were all largely automated into a single workflow, so that at any given point, you (or your manager or your client) could see exactly where you were in a project.

Since then, this continuous integration mindset has been extended to other processes, including machine learning (MLOps), design (DesignOps), and data pipelines (DataOps), among many others. What’s so significant about this is that these processes are essentially automating not just programming, but nearly all aspects of businesses. The NoCode/LoCode movement is yet another expression of this, as is robotic process automation (RPA).
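
As a tiny illustration of the mindset, here’s a sketch of a DataOps-style pipeline stage in plain Python: validation is an automated gate in the flow, so a bad batch fails the run the same way a failing unit test fails a build. The record schema and rules are made up.

```python
# A minimal sketch of the DataOps idea: validation is a first-class,
# automated stage in the pipeline, not an afterthought. The record
# schema and rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    quantity: int
    unit_price: float

def validate(records):
    """Gate the pipeline: fail fast, like a failing unit test in CI."""
    errors = []
    for r in records:
        if r.quantity <= 0:
            errors.append(f"{r.order_id}: non-positive quantity")
        if r.unit_price < 0:
            errors.append(f"{r.order_id}: negative price")
    if errors:
        raise ValueError("validation failed:\n" + "\n".join(errors))
    return records

def transform(records):
    """Downstream stage only ever sees validated data."""
    return [{"order_id": r.order_id, "total": r.quantity * r.unit_price} for r in records]

batch = [Order("A-1", 3, 9.99), Order("A-2", 5, 1.25)]
print(transform(validate(batch)))
```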

I consider these all aspects of a post-Agile organization. By post-Agile I do not mean that Agile is not used, but that, for the most part, it’s been incorporated into the very software that we work with, to an extent that the organizational processes are no longer as relevant.

This is going to continue to be the case, especially as organizations themselves become more distributed, asynchronous, and geospatially agnostic. Marketing Ops is now all the buzz (surprise, surprise!), though this isn’t all that strange when you think about it. Marketing used to be the soft side of sales, but increasingly it has become a technical discipline, requiring an awareness of statistical theory, Bayesian analysis, semantics, and data modeling. I expect this will likely subsume other areas as well (HR is in the middle of its own transformation, and is in fact becoming one of the most graph-like parts of the entire organization).


3D Printing Grows Up

Three-dimensional printing was just beginning to take off back in 2010, and at the time, its uses were largely limited to single-pass plastics. It would take another nine years to get to the point where metals, glasses, concretes, and other base stocks became staples of 3D printing, and this in turn is having a profound impact on manufacturing and construction. For instance, in 2021, a 3D-printing-based construction company received a permit to print a house in Florida, using concrete as the “ink”.

Similarly, metal-based inks (and corresponding projects) are becoming increasingly common, a process that was only really resolved in late 2019. In the case of homes, such a process can construct inexpensive houses within a few weeks, rather than the months typical of more traditional construction methods. The most immediate needs involve the creation of new homes in the wake of hurricanes and other destructive events, but my sense is that 3D printing is likely to become so deeply embedded in manufacturing and construction that it will largely replace current methods by 2035.

I also recently had an interesting conversation with a nephew of mine, who works at a laboratory literally building viruses. This too is a form of 3D printing, but at the molecular level, with such viruses then being used to perform tests for specific biological agents. It’s worth noting, and something that I think gets lost in the discussion of viruses and vaccines, that the ability to create such vaccines in months would be impossible without this form of 3D printing. If Covid-19 had taken place two decades ago, it would have taken five years or more before vaccines were available (if ever) and would have cost hundreds of billions of dollars to achieve. As awful as the virus has been, the timing was fortuitous enough that multiple solutions could be created within a year.

The one thing these processes have in common is that they start out with a virtual model that is then printed (or serialized) into a physical-world object of some complexity. This is in fact the logical evolution of the digital transformation process that I think will be ongoing throughout this decade – a process that is the mirror of digital twins, and one I’d describe as realization: modeling something within a computer first, keeping that modeling process largely virtual as you build corresponding virtual tests, and only then realizing such models as physical representations.

What’s intriguing here is that the realization of models is not even an end state – it’s simply a byproduct. If you can print something that has a queryable identifier, then the physical and digital twins can exist together, indeed with the physical instance creating multiple virtual shadows that represent different expressions of the same fundamental model. Printing a house, publishing a 3D generated movie, building molecular-scale sensors, or tracking drone units in real-time are all just variations on the same fundamental problem of creating reflections of reality in the virtual world.


Optical and Quantum Move Out of the Lab

Optical computing has been around for a few years, primarily via the medium of optical rather than electronic cables and routers for networking, but in the last year, optical computing is increasingly moving into more compute-heavy operations. The benefit that optical computing has is that a single beam of light can encode a broad spectrum of information.

The problem that it faced was that such light beams needed to be converted into more traditional forms of encoding (primarily magnetic moments on a medium) in order to be persisted. That’s changing as recent advancements in materials engineering are making it possible to “freeze” photons of light so that they can be stored without conversion and can be queried without problems of decoherence. Because light is bosonic in nature, you can store multiple superpositions of information in this manner, making it possible to create very information-rich data structures with minimal cost or scalability issues. While I do not expect optical computing to be that critical in 2022, I do expect that you will start seeing true commercialization within the next couple of years.

The same thing can be said for quantum computing, which also takes advantage of quantum superpositioning, albeit more in the fermion realm. In 2021, researchers were able to create “circuits” consisting of thousands of qubits, whereas before, any quantum computing was done across only a couple of dozen qubits before decoherence became a major issue. This ability, which takes advantage of sophisticated error-correcting algorithms, means that real-world quantum computing systems may be deployable within the next few years.

My expectation is that these two technologies will continue to remain specialized for a while, but especially with quantum systems, the real value will come in the ability to create entangled qubits that can retain their entanglement and as a consequence form the foundation of communication systems that are more or less independent of distance or intervening material. Note that this doesn’t violate Einstein’s theory of relativity, for instance, but it does mean that an entangled “transmitter” could send a signal from the Earth to the dark side of the moon (or, perhaps more relevantly, could send a signal from a naval base to a submerged submarine and vice versa securely).


Fusion In Three … Two … One …

I expect fusion fever to start heating up in the coming year. There were several key advancements in various parts of the fusion puzzle, including the introduction of pulsed lasers with slightly chaotic streams to create counter-vortices in energy flows to keep fusion containment bubbles stable, the introduction of new kinds of magnet switches that more efficiently target the tritium fuel, and alternative strategies for laser bombardment that have pushed energy yields for fusion above 1.0 for significant periods of time. This will all come together with the big ITER fusion reactor starting up in France in 2024, along with multiple secondary projects that are continuing in parallel.

Fusion and Artificial Intelligence have both been described as technologies that are “only ten years away” – and have been for the last sixty years – but in the case of both, we are in fact getting within striking distance of achieving that goal now. We know fusion works – the physics are not in dispute – but the engineering challenge of controlling fusion as an energy source has been more difficult than imagined. With ITER, I suspect that we’ll finally get past even the engineering aspects and move towards the commercial production of fusion-based energy within the next five years.

In a related area, I think that liquid-sodium thorium fission reactors will end up seeing regular deployment globally within the next few years. Thorium reactors are safer than conventional uranium reactors, produce no isotopes with long radioactive half-lives, do not melt down, and are far less expensive to build.

I do not believe that any one power source is sufficient for our civilization. Indeed, one of the biggest factors that I see in energy futures is that we’re moving to a truly multimodal approach to energy: by beefing up the electrical grid (part of the Infrastructure package that was recently signed into law by President Biden in the US) to handle multiple energy inputs and outputs over a smart grid, we will be far less susceptible to disruption. Industrial IoT (IIoT) will be a big part of this process, making the grid capable of handling everything from petroleum byproducts to kinetic energy systems (hydro, wind, tidal) to solar photovoltaic and photosynthetic systems to fission and fusion. Again, I see these ultimately mediated by graph-based systems.


Drone

In music, a drone is a single, usually deep note that is sung primarily to provide a foundation for chord progressions. In Islam, a similar sustained intonation can be heard in the call to prayer (the adhan), performed by the muezzin (مُؤَذِّن), an official (not necessarily a cleric), before the daily prayers (salat). The drone is also the male bee in a hive, notable primarily for its buzzing sound. It is this sound (of small rotors cutting through the air) that gave the flying robot its name.

The robotic revolution is not, in fact, taking place with humanoid replicants walking among us like Maria from Metropolis or C-3PO from Star Wars (though these are beginning to become significant in Japan, a country that has had a giant-mecha fetish dating back decades). Instead, the real revolution seems to be in flying drones, which have featured heavily in everything from non-flammable fireworks displays over stadiums to the latest must-have toy of television news stations to police surveillance drones that create extra sets of eyes in the sky without the expense or potential danger of flying manned helicopters.

I believe that we’re on the cutting edge of a revolution in drones that will determine how they are used, how they are regulated or barred from being used, and who has the right to use them in which circumstances. What makes them so intriguing is that they have to be aware of their environment in a way that current terrestrially bound vehicles don’t. This means that they are increasingly becoming the test bed for real-time AI systems. In the process, drones and drone AI are laying the foundation both for self-driving vehicles and (yes, finally!) airborne passenger vehicles, which are essentially drones capable of carrying human-sized loads.

Currently, the primary applicability of drones seems to be in decidedly niche areas – photography and videography, difficult-to-access infrastructure inspection (bridges, roofs, etc.), land use surveillance (such as for wildlife preserves), some law enforcement, and increasingly, traffic assessment. There are many, many other potential use cases, from delivery services to private security to search and rescue, but these ultimately face regulatory pressures, and especially with respect to privacy and safety they force some uncomfortable questions that generally haven’t been answered yet. How do you keep people from getting hurt by drones (or worse, keep drones from being used as weapons)? How do you keep drones from crashing into walls or roofs? What prevents drones from getting tangled in power lines? Where do drones fit in the overall surveillance picture, not just from governments but from media and corporations? And what about noise?

I believe that 2022 is going to be the year where these conversations take place, setting into place regulatory frameworks within the next two to three years. In many respects, these discussions also foreshadow the ethical debates about other robotic entities through the rest of the decade and beyond, especially including augmented reality, other autonomous vehicles, and the use of drones in war.

Final Thoughts

I think that 2021 may very well go down as the year when awareness of the risks from climate change finally diffused throughout the political spectrum. Politicians who have been fairly adamant in their denial have quietly been changing their message – not endorsing it as “climate change”, but tacitly acknowledging that something unusual is going on and something needs to be done. What that something is, of course, is still very much open to debate, but all too often politics comes down to acknowledging that a problem exists in the first place.

I don’t have a lot of deep insights into new computer trends this year, beyond my own small slice of it. We’re moving to the cloud, becoming temporally asynchronous, geospatially distributed, and focused less on doing the tasks ourselves and more on guiding the software that does them. I also think that while it can be argued that specialized AIs exist right now (software that is able to use experience to improve itself at its given purpose), we’re still some ways (years, maybe decades) away from either generalized AI or sentient computing, which I see as being very different things.

So, that’s 2022. Let me know your thoughts about what you see coming down the road. Oh, and no refunds. You break it you buy it. Just so you know.