
High-performance computing’s role in real-time graph analytics

  • Alan Morrison 

A podcast with CEO Ricky Sun of Ultipa


Image by Gerd Altmann from Pixabay

Relationship-rich graph structures can be complex and resource-intensive to process at scale with conventional technology. This is particularly true of queries that must traverse 30 hops or more into a graph.
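
To see why hop depth matters, here is a minimal Python sketch of a breadth-first k-hop traversal. The toy graph and the `k_hop_neighborhood` helper are hypothetical, but they illustrate how, at an average fan-out of d, the visited frontier can grow toward d**k nodes, which is what overwhelms conventional engines at 30+ hops.

```python
from collections import deque

def k_hop_neighborhood(adj, start, k):
    """Return all nodes reachable from `start` within k hops (BFS).

    `adj` maps each node to its neighbors. With an average fan-out of d,
    the result set can approach d**k nodes, so deep traversals on large
    graphs quickly become resource-intensive.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

# Hypothetical toy graph with fan-out 2
adj = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(k_hop_neighborhood(adj, 0, 2))  # {0, 1, 2, 3, 4, 5, 6}
```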

Moreover, a key benefit of graph technology is ease of large-scale integration. When it comes to analytics, bringing all the relevant information together and processing it quickly is critical to effective discovery.

For that reason, high-performance computing (HPC) methods that can sustain more than a trillion floating-point operations per second have long been desirable for efficient, large-scale enterprise graph analytics. In 2012, for example, back in the early days of data lakes and rising demand for big data analytics, supercomputer provider Cray launched a subsidiary called YarcData that targeted the enterprise market for graph DBMSes.

YarcData’s Urika in-memory appliance, available in 2013, featured a maximum of 512 terabytes of contiguous random access memory (RAM), plenty of space to load entire large graphs and explore them with a range of algorithms and visualization techniques.

Urika’s first users came from knowledge-intensive verticals that needed to work with giant graphs for discovery purposes, including financial services (fraud detection, for example), pharmaceuticals (drug discovery), and cybersecurity. Customers paid upwards of $200K per appliance, a cost high enough that YarcData offered subscription pricing as an alternative to buying the appliances outright.

HPC has moved up the performance curve and down the cost curve since then. Today, HPC graph systems provider Ultipa claims to deliver cost-effective, real-time graph analytics at scale using high-performance computing methods. Ultipa’s proprietary technologies include the following:

  • Hybrid Transaction/Analytical Processing (HTAP): HTAP, a term Gartner coined in 2014, refers to the ability to handle transactional and analytical workloads at scale in a single system. Ultipa’s HTAP implementation scales both horizontally (distributed across clusters) and vertically (per server). 
  • High-Density Parallel Computing (HDPC): HDPC is Ultipa’s term for its patent-pending concurrency capability, which it says scales nearly linearly: as the number of instances grows, so does throughput (see the sketch after this list). 
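
For intuition about near-linear concurrency scaling, here is a minimal Python sketch that assumes nothing about Ultipa’s patent-pending HDPC internals. It simply splits a hypothetical adjacency map into balanced partitions and processes them in parallel; as long as partitions stay balanced and coordination stays cheap, throughput grows roughly in proportion to the worker count.

```python
from concurrent.futures import ProcessPoolExecutor

def count_edges(partition):
    """Count edges in one vertex partition (a stand-in for real analytics work)."""
    return sum(len(nbrs) for nbrs in partition.values())

def parallel_edge_count(adj, workers):
    """Split the adjacency map into `workers` partitions and process them concurrently.

    With balanced partitions and little cross-partition coordination,
    throughput grows nearly linearly with the worker count. This is the
    general idea behind near-linear scaling, NOT Ultipa's HDPC implementation.
    """
    nodes = list(adj)
    parts = [
        {n: adj[n] for n in nodes[i::workers]}  # round-robin partitioning
        for i in range(workers)
    ]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_edges, parts))

if __name__ == "__main__":
    # Hypothetical ring-like graph: 1,000 nodes, 2 out-edges each
    adj = {i: [(i + 1) % 1000, (i + 7) % 1000] for i in range(1000)}
    print(parallel_edge_count(adj, workers=4))  # 2000
```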

During this podcast, recorded in mid-January, Ultipa CEO Ricky Sun shared a number of insights about how large banks are using the company’s HPC graph systems for real-time liquidity risk evaluation. Silicon Valley Bank’s collapse in March 2023 underscored why more banks are now turning to HPC graph technology for their risk assessments.
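
To make the liquidity-risk use case concrete, here is a toy contagion sketch, not Ultipa’s or any bank’s actual model: banks are nodes, directed edges are hypothetical interbank exposures, and a default propagates hop by hop to lenders whose capital buffers are exhausted. This is exactly the kind of multi-hop traversal that graph engines accelerate.

```python
from collections import deque

exposures = {            # hypothetical lender -> {borrower: amount}
    "A": {"B": 50, "C": 30},
    "B": {"C": 40},
    "C": {},
    "D": {"A": 60},
}
capital = {"A": 40, "B": 35, "C": 20, "D": 100}  # hypothetical capital buffers

def contagion(failed_bank):
    """Return the set of banks that fail once `failed_bank` defaults."""
    buffers = dict(capital)          # work on a copy of the buffers
    failed = {failed_bank}
    queue = deque([failed_bank])
    while queue:
        borrower = queue.popleft()
        # Every lender exposed to the failed borrower absorbs a loss.
        for lender, book in exposures.items():
            if lender in failed or borrower not in book:
                continue
            buffers[lender] -= book[borrower]
            if buffers[lender] <= 0:  # buffer exhausted: lender fails too
                failed.add(lender)
                queue.append(lender)
    return failed

print(contagion("C"))  # {'A', 'B', 'C'} with these toy numbers
```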

Hope you enjoy the podcast as much as I did recording it.

Podcast with Ricky Sun, CEO of Ultipa