Current tech challenges
In a challenging COVID-affected economy, Gartner has predicted that global spending on IT will decrease by 8% this year. This only increases the pressure on CIOs and tech leaders to leverage available enterprise IT resources more effectively. In this age of digital transformation, how can companies still deliver iteratively better performance and stay ahead of competitors? AI may be the answer today, but enterprise data keeps growing: recent research on the global Data Gravity Index (DGx™) forecasts that by 2024, enterprise data levels in the Forbes Global 2000 will reach 8.96 exaFLOPS of compute power and 15,635 exabytes of private data storage annually. That exponential growth in data could point to quantum computing being tomorrow's megatrend.
The three stages of digital transformation
Digital transformation doesn't happen all at once. In fact, Patrick Lastennet, Interxion's Director of Marketing & Business Development (Enterprise), sees three distinct waves:
- Consolidation of assets
Taking stock of all current IT infrastructure assets, strategic and operational, including data centres. From this comes consolidation and a roadmap that lays the foundation for adopting technologies like cloud computing and software-defined networking.
- Restructuring networks to enable distributed applications and manage connectivity
Once a roadmap has been put in place and goals have been set, businesses need to think about the underlying network required to enable distributed applications that run in hybrid environments, from cloud(s) all the way through to colocation and on-premises. It's key for businesses to assess how best to re-architect their connectivity before they get locked in to one cloud provider.
- Data driven innovation - building new digital supply chains
The third wave is driven by the business: how teams start enabling future applications and workloads like AI, IoT or, eventually, quantum computing. In theory, IT infrastructure comes first, then network, and finally innovation, but in practice these waves rarely happen in order. Pressure to outperform competitors and deliver more efficiently sometimes means innovation sneaks in before wave one or two.
For technologies like AI specifically, 'shadow AI' can become a real problem for businesses trying to jump ahead. The lowered entry costs for AI through self-service cloud providers make it easy for data scientists to dip their toes into machine or deep learning, but this isn't feasible as a sustainable solution. It can lead to fragmented, inefficient workloads siloed across locations and teams: not ideal for any business in 2020 and 2021.
To build a wider, future-proof IT strategy, enterprises need to look holistically at their infrastructure before they can build a 'Centre of Excellence' in AI. Other hurdles include quantifying the benefit of AI, model debt and underused intellectual capital, all of which can be hard for businesses to define. To reliably scale an underemployed AI solution, businesses also need to overcome siloed teams and resources, bringing IT functions together to enable AI innovation.
AI is now
The opportunity for AI is here now: 84% of executives say they won't achieve their growth objectives unless they deploy AI, and a staggering 75% believe they risk going out of business within five years if they don't scale AI.
"AI is adept at finding the needle in the haystack, and distilling oceans of data into actionable insights that put you ahead of your competition." - Tony Paikeday, Director of Product Marketing, Artificial Intelligence and Deep Learning, NVIDIA
AI has crept into almost every facet of our lives, with breakthrough applications in image recognition and search back in 2017, and recent strides in voice recognition and natural language processing. Businesses that sell to individuals rather than companies are already reaping the benefits of personalised recommendation algorithms, which gain monetisation potential as they learn exactly what people are looking for. Part of Interxion's growth as a colocation provider has been driven by platforms using data centres for AI inference: running trained models in production.
Not all AI is made equal
A key differentiator in AI usage is whether a business builds bespoke models using its own data or uses AI 'off-the-shelf'. Within each sector there are use cases for both, and recognising their different benefits is key for businesses looking to make the most of AI capabilities. 'Off-the-shelf' AI refers to models and algorithms already developed by a platform provider, such as speech-to-text from Google Cloud, which would benefit a business looking to automate a high volume of speech-to-text conversions, for example within a call centre. However efficient, though, this level of automation won't necessarily differentiate a business from its competitors.
“Businesses who really want to lead and differentiate themselves have to build their own models with their own expertise and their insight. To do this, combine your ideas and proprietary data, with high performance computing provided by the likes of NVIDIA GPU, along with the right data centre infrastructure to enable innovation.” - Patrick Lastennet, Interxion's Director of Marketing & Business Development (Enterprise)
Building infrastructure for now and the future
With big AI ambitions comes the requirement for scalable, robust infrastructure that acts as a springboard for business growth. Staying on top of the entire MLOps cycle is the goal for a CIO or CTO, with two key considerations: power density and connectivity.
For both training and inference, deep learning relies on GPU resources to work with the vast amounts of data needed. Over recent years, transistor density in GPUs has increased exponentially, delivering more computational power per watt. However, as workloads grow, robust data centre infrastructure is essential to support the power density deep learning demands. Power density per rack has been steadily increasing in data centres, with a shift towards high density (15kW+ per rack) to support technologies like AI.
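To make the density point concrete, here is a back-of-the-envelope sketch. The 6.5 kW figure is NVIDIA's published maximum system power for a DGX A100; the legacy rack budget and overhead factor are illustrative assumptions, not a site design:

```python
# Rough rack power-density estimate for deep learning hardware.
# Assumptions: DGX A100 max system power ~6.5 kW (NVIDIA's published spec);
# the 5 kW legacy rack budget and 10% overhead factor are illustrative.

DGX_A100_KW = 6.5          # max power draw of one DGX A100 system
TYPICAL_RACK_KW = 5.0      # a common legacy colocation rack budget

def rack_load_kw(systems_per_rack: int, overhead: float = 1.1) -> float:
    """Estimated rack draw including ~10% for networking and cooling."""
    return round(systems_per_rack * DGX_A100_KW * overhead, 2)

one = rack_load_kw(1)   # a single DGX already exceeds a 5 kW rack budget
two = rack_load_kw(2)   # two systems approach the 15 kW+ high-density class
print(one, two)
```

Even one AI-optimised system outgrows a typical legacy rack, which is why purpose-built, high-density data centre space matters.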
With the need to aggregate vast amounts of structured and unstructured data from multiple internal and external sources comes a connectivity requirement.
CTOs need to consider how to build the data lakes that feed model training, and how that data will be collected and managed. Data usually comes from multiple sources, especially when collecting from connected IoT devices (for example, a mobile phone or a sensor in a car), which requires connectivity to the edge, where location is key.
Another potential data source is the cloud, where data scientists benefit from the flexibility of connecting to cloud providers from wherever they are based, likely in a metropolitan area such as a city centre.
“Connectivity requirements are often overlooked in AI, it’s absolutely key in terms of being available to support your AI workflow” - Patrick Lastennet, Interxion's Director of Marketing & Business Development (Enterprise)
Once a business has built a data repository, it will want compute to train its models, ideally as close as possible to that repository for efficiency: in the same data centre. For some workloads, businesses may need to flex to the cloud for occasional larger requirements driven by specific customer projects. This makes colocation the ideal solution for enterprises with an AI strategy, as it fulfils both the power density and the connectivity requirements. But as AI solutions are already being implemented and scaled by businesses keen to differentiate themselves, what is the next step?
Quantum is next
The next frontier for IT and AI is the leap into quantum computing, spearheaded by companies like Google and IBM. Operating in a completely different way to traditional computing, quantum computing promises to open up modelling and calculations that simply aren't possible today. It isn't just a matter of increased efficiency: quantum machines could simulate large, complex systems with many interacting variables.
Quantum computing works by using qubits: two-state quantum mechanical systems, analogous to binary bits. The difference, however, is that a qubit isn't just on or off; it can exist in a combination of both states at once (superposition). Thanks to its quantum properties, a qubit can also participate in entanglement, where the states of two or more qubits become correlated in a way that can be exploited to calculate complex models.
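To make superposition and entanglement less abstract, here is a toy state-vector simulation in plain Python (no quantum SDK assumed). It builds the textbook Bell state, in which measuring either qubit instantly determines the other:

```python
# Minimal two-qubit state-vector sketch (illustrative, not a real quantum SDK).
# Amplitudes are ordered |00>, |01>, |10>, |11>; probability = |amplitude|^2.
from math import isclose, sqrt

def hadamard_on_first(state):
    """Apply a Hadamard gate to the first qubit, creating superposition."""
    a, b, c, d = state
    h = 1 / sqrt(2)
    return [h * (a + c), h * (b + d), h * (a - c), h * (b - d)]

def cnot(state):
    """Controlled-NOT: flip the second qubit when the first is 1 (swaps |10> and |11>)."""
    a, b, c, d = state
    return [a, b, d, c]

# Start in |00> and build the Bell state (|00> + |11>) / sqrt(2)
bell = cnot(hadamard_on_first([1, 0, 0, 0]))
probs = [abs(amp) ** 2 for amp in bell]
print(probs)  # ~50% chance of |00>, ~50% of |11>, never |01> or |10>
```

The outcomes of the two qubits are perfectly correlated: the entangled pair never measures as |01> or |10>, which is the property quantum algorithms exploit.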
In scientific research, quantum computing has the potential to model how drugs interact with our bodies on a molecular level. In the future, when this technology becomes more widely available, will there be viable business applications?
Financial trading is one potential application: quantum computers could better account for unintuitive relationships between inputs to deliver highly accurate predicted distributions, for example across financial transactions and market patterns.
However, quantum computing at scale still seems some way off, with many implementations limited to a small number of qubits or only running for short bursts. A quantum setup is highly sensitive to its environment, meaning factors like temperature and voltage need to be tightly controlled, as they are in academic laboratories. Businesses shouldn't expect larger-scale, commercial quantum computers for a few years yet. When the technology does become more available, it's likely that data scientists will use a combination of 'traditional' computation and cloud-based quantum workloads, flexing between whichever is most appropriate for the task at hand.
The implications of quantum computing for AI and wider technology have huge potential in the future, but AI is already happening now. For many businesses, the turning point for scaling up AI operations is imminent, as it's only a matter of time before their competition deploys their own solutions.
With an uncertain future, businesses need to be poised and ready for IT opportunities that can transform performance, bring them closer to customers and keep them ahead of the competition.
A robust, agile and secure IT infrastructure is the first step to enabling IT innovation at scale. Here at Interxion UK, we work with several partners to best serve AI innovation at scale for businesses looking to differentiate themselves in their sector. We work with NVIDIA, powering their GPUs with purpose-built data centres designed for maximum power density. We also work with solution provider Scan UK, who help enterprises take advantage of NVIDIA GPUs through distribution and managed services.
Core Scientific is a US-based startup we work with in the UK to provide AI-as-a-service through their on-demand platform and tools. With their Plexus™ service, data scientists can easily manage workloads with infrastructure dedicated to them in colocation, instead of relying solely on public clouds. In both our Brick Lane campus and Cloud House, we offer companies the opportunity to test drive the latest NVIDIA systems, like the DGX A100.
Build your hybrid IT architecture with Interxion UK, where secure data flow means data gravity challenges won't be a barrier to agile IT management, enabling innovation at scale. We've built a hub for connected communities of interest: colocation that drives strategic cross-connects to trading markets, partners and customers.