Video: Keynote | Duration: 3664s | Summary: Keynote | Chapters: Welcome to VAST (75.795s), Welcome and Introduction (159.455s), VAST Data Platform (237.78499s), AI Infrastructure Evolution (351.63498s), AI Operating System (468.97998s), Platform Evolution (597.60004s), Customer Success Stories (944.105s), AI Data Pipelines (1169.2001s), Pipeline Breakdown (1396.9701s), VAST-NVIDIA Partnership (1671.1951s), AI Agents & Partnerships (2524.5s), Welcome to Cosmos (3175.6702s), Closing & Next Steps (3421.6199s)
Transcript for "Keynote":
VAST Data is in the arena of dealing with huge amounts of data, which is probably the most important component of this AI revolution. The more we all dug into it, the more we realized what an incredible company you have been building. We all heard your company has over $9 billion in valuation. VAST Data has been architecturally focused on getting the most efficiency out of everything they do. Working with the VAST team allows us to bring innovation to our customers faster than anybody. We've sold more than $1 billion worth of software. We've been growing at about two and a half to three x year over year. Congratulations. I know the sky's the limit for you, and that's VAST. VAST. VAST. VAST. VAST. VAST Data. Good morning, afternoon, and evening to everyone joining us from around the world. I'm thrilled to be your host today as we embark on a journey through the world of AI and data with some of the brightest minds and most innovative organizations leading the way. Today is no ordinary event. This is a moment where AI thought leaders, innovators, and builders have come together to reveal how they're shaping the future. You're gonna hear about groundbreaking new technologies from VAST Data with NVIDIA, thought-provoking industry insights from Jensen Huang, and how some of the world's leading enterprises, including HSBC, ServiceNow, the Chan Zuckerberg Initiative, and India's Ola, are transforming the way organizations deliver and derive value, leveraging the power of AI. What makes today so special? We're not just talking about AI. We're gonna show you how AI is transforming industries, enhancing business capabilities, and setting the stage for a future where data drives everything. The announcements and discussions we've curated will leave you not only inspired, but armed with the knowledge and tools to make an immediate impact. So without further ado, it's my pleasure to introduce VAST Data's founder and CEO, Renen Hallak. 
Today, we're here to talk about something big, something that's going to change the way the world works with data. At VAST, we continue to rethink, reimagine, and reshape what's possible. Last year, we introduced the VAST Data Platform, and it wasn't just an incremental improvement. It was a leap forward. We built beyond storage to become the full-stack software infrastructure layer, the foundation for the next generation of AI, enabling AI to operate at its full potential. We introduced not just the VAST DataStore to provide fast and resilient access to growing amounts of unstructured data, but also the VAST DataBase to bridge the gap by giving unstructured data structure, bringing order to chaos. More recently, we delivered the VAST DataEngine, which brings the platform to life, making everything run on data-driven triggers and functions. Indeed, what began as a storage system is now the operating system for the AI age, an operating system not within a computer or even within a data center. Through the abilities of the VAST DataSpace, it spans the world, connecting data across continents, powering the next wave of innovation. We built this new data platform because AI is evolving, and it demands more. AI is moving from research and experimentation to something that touches every part of our lives, and traditional infrastructure can't keep up. That's why we built a completely new architecture, one that breaks free from the limits of performance, scale, and complexity. With our disaggregated, shared-everything architecture, we're delivering a system where data and computation flow freely without bottlenecks across entire organizations and across the world. As a result, the largest AI clouds, the most cutting-edge model builders, and global enterprises are turning to VAST as the foundation for their future. Today, AI is beginning to mature, transitioning from research centers and LLM builders into multimodal AI, into AI clouds, into the enterprise. 
Last year, there was a clear distinction between training and inference, and between AI as a whole and traditional enterprise workloads. Those lines are now beginning to blur. As we see more images and video and sound, the weight of infrastructure shifts from compute intensive to a more balanced mix of compute and data. In the early days of the Internet, search engines ranked websites based on links. Then people began interacting with these search engines, and we learned that human input was key. The same thing is happening with AI. Reinforcement learning from human feedback is transforming the way AI learns from every conversation. An autonomous car learns from every human touch. What used to be separate, training, fine-tuning, inference, has now become one continuous loop. AI models are no longer just built. They grow and evolve. To power that evolution, we need a single platform that can handle both the centralized, large-scale, high-throughput nature of training and the resilient, low-latency, distributed nature of inference at the edge. The VAST Data Platform, breaking these trade-offs and enabling one universal platform across training, inference, and traditional enterprise workloads, has become more crucial than ever before. There is no one better than HSBC, one of the largest enterprises in the world, to tell this story. We're gonna be faster. We're gonna be easier to work with. We're gonna be proactive where traditionally we've been reactive. The possibilities are nearly endless. There is probably no bank in the world with the data availability that HSBC has. We're in 62 markets across the world. We're one of the biggest payments providers in the world, biggest trade bank in the world, and arguably, you know, a full-scale retail, commercial, and global banking franchise. This technology is gonna allow us to access data that we've never been able to get to in the past. I like to think about AI being the operating system of a bank. 
If we look today, banks are fundamentally built around people. And while there'll always be a role, and this is going to be a partnership between machine and human, I think it is going to flip. And I think AI is effectively going to be that operating system that underpins how a bank works and interacts with its customers, supplemented by humans who are better versions of themselves. We've got to balance that, though, with safe, ethical, and compliant technology. And the investment we need to make in that infrastructure to be able to capture that data, provision that data, and use that data safely underpins and is fundamental to how we use this technology in the future. There are lots of opportunities for banks to go into nontraditional revenue streams, but an obvious revenue stream for us would be to go into the information services business to provide our customers insights, not only about their banking, but their business model, how they compare to their competitors, how to expand internationally, how to improve their working capital cycles. These are all things we do today, but this technology will allow us to democratize that type of insight and capability to ultimately make our customers more efficient and more productive. It's good for them, and it has to be good for us. Twelve months ago, we unveiled a revolutionary data platform concept that put data at the center of AI-powered discovery. Now a year feels like a lifetime. And since the introduction of the VAST Data Platform, we've witnessed an absolute explosion in the data collection and curation that's powered by new AI applications. Now datasets are not only getting exponentially bigger, but the machinery of AI continues to grow, where VAST systems are now the data foundation for billion-dollar computers that are powered by tens to hundreds of thousands of GPUs. 
Data scale and fast access have become a critical requirement for the thinking machines of tomorrow, and VAST's revolutionary disaggregated, shared-everything architecture is now at the center of the AI factory. And while our platform continues to scale, it also continues to evolve. The VAST engineering team is now hundreds of engineers strong, and they shipped tons of code this year to enhance the system's scale, its capability, and its security. The breadth of the platform is why VAST systems have now become the storage and analytics infrastructure of choice for data-driven enterprises and the cloud service providers that serve them. The sum of the platform's features is far greater than its parts. When we unveiled the VAST Data Platform in 2023, we painted a picture of our idea of a thinking machine that broke fundamental trade-offs of storage, of database services, and of real-time data computation. Today, we've delivered on that promise in a number of dimensions. Exabytes of the VAST DataStore have been deployed to break trade-offs of performance, of capacity, of simplicity, and of scale. The DataStore is the data foundation of the system, allowing customers to store everything from home directories to container stores to scratch volumes to high-performance object storage buckets and even data protection volumes. Our parallel architecture combines with a game-changing storage efficiency algorithm that allows us to establish the DataStore as the universal storage platform for all of your data. The VAST DataBase continues this trend by breaking trade-offs of transactions and analytics. The DataBase blurs the lines of database management systems by providing the parallel ingestion performance of the world's fastest event streaming infrastructure with the query performance of the world's fastest columnar data warehouse and the scale and cost of a data lake. 
Now we've been working with customers on the VAST DataBase for about a year, and we've realized that we've solved critical I/O bottlenecks that the data science industry has been ignoring for over a decade. This enables VAST to introduce up to 90% TCO reduction versus other data platforms. More importantly, by building the world's first parallel transactional data warehouse, we're challenging fundamental application pipeline paradigms as we embed a parallel event broker that can ingest millions of events per second directly into VAST tables, making it possible to correlate real-time data streams against all of an organization's analytics archive. What we've done is eliminate the classic data observability gap that is caused by independent event buses, data lakes, and complex batch ETL processes. The DataEngine is now also shipping to early customers. This is the logic of the VAST Data Platform, designed to bring data to life and enable real-time computing on structured and unstructured data flows. The first release features a native SQL engine, written in C, that provides native analytics services against VAST DataBase tables. Today, we're also announcing the first version of VAST OS, which enables customers to bring their own Python functions to the system with built-in native eventing infrastructure. Customers can create their own event triggers that call functions in the system, which can enrich, analyze, and transform both structured and unstructured data streams in real time. We've not just unified files, objects, tables, events, and functions. We've also unified the namespaces that organizations deploy to connect all of their data centers, on premises and in the cloud. The VAST DataSpace is designed to break the trade-off between global data access consistency and access performance across these data centers, where fast access can be enjoyed by users wherever they get on a VAST cluster. 
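The trigger-and-function pattern described above can be sketched in a few lines of Python. This is purely illustrative: the `Event` shape, the `on_event` decorator, and the dispatch registry are hypothetical stand-ins for the platform's eventing infrastructure, not VAST's actual API.

```python
# Illustrative sketch of event-triggered user functions. All names here
# are invented for the example; they mirror the concept, not a real API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    kind: str      # e.g. "object_created", "row_inserted"
    path: str      # the object or table the event refers to
    payload: dict  # data attached to the event

# Tiny registry standing in for the platform's native eventing layer.
_triggers: dict = {}

def on_event(kind: str):
    """Register a user-supplied Python function to fire on matching events."""
    def register(fn: Callable[[Event], None]):
        _triggers.setdefault(kind, []).append(fn)
        return fn
    return register

def emit(event: Event):
    """Dispatch an event to every trigger function registered for its kind."""
    for fn in _triggers.get(event.kind, []):
        fn(event)

enriched = []

@on_event("object_created")
def enrich(event: Event):
    # A user-defined enrichment step: tag each new object as it lands.
    enriched.append({"path": event.path, "bytes": len(str(event.payload))})

emit(Event("object_created", "/buckets/raw/doc1.txt", {"text": "hello"}))
```

In a real deployment the registry and dispatch would live inside the platform; the point is that enrichment runs the moment data arrives, not in a later batch pass.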
The DataSpace is a hybrid cloud data management system that employs intelligent prefetch algorithms, pipeline parallelism across the wide area network, and decentralized lock management to make reads and writes fast regardless of where your data natively lives. Putting these two together, the VAST DataEngine and the VAST DataSpace provide the ultimate flexibility, allowing our customers to defy data gravity by easily sending data to compute or compute to data, all by automating the execution of functions at the location where they would be most effectively performed. Now these capabilities are core to our mission of reimagining the idea of a data platform in the age of AI, and they are also now showing tremendous value within customer environments like the Chan Zuckerberg Initiative, which is using the VAST DataSpace to interconnect all of their bioinformatics computing data centers to simplify and accelerate their mission to rid the world of disease, as well as at Ola, India's leading next-generation consumer mobility and AI infrastructure company. Ola is leveraging the VAST DataBase to realize tremendous analytics pipeline acceleration, which has a direct impact on their ability to minimize infrastructure sprawl and makes it possible to process complex data streams in real time. Now let's hear the stories directly from them. Data is the absolute key to everything. The AI work that we're doing is part of moving that forward in eradicating disease by the end of the century. Our aim is to create a different paradigm for AI. In India, you know, we have more than 800 million users, and we have more than 20% of web traffic. But we have hardly any AI infrastructure or compute infrastructure which is actually truly Indian. So we see this as a requirement, or more like a Sankalp which we have taken, that we want to make India self-sufficient and bring the AI revolution at the right cost point with all the kinds of tools which are required. 
When the Chan Zuckerberg Initiative announced how we were going to create one of the world's largest research clusters of GPUs for advanced AI work in the name of supporting the Virtual Cell project, it was so exciting, because we're starting at kind of an early level and having to scale up really fast, and that is an area where VAST Data comes into play. We are aiming towards building our own chip. We are building our own cloud infrastructure, and we are also creating apps which are required to create that ignition where people can start building their own tools and applications. So by creating this whole ecosystem, we believe we can make India an export hub of AI, and AI requires large, high-speed data storage systems. And VAST has been a very important partner to us because they have provided that kind of infrastructure. When you think about how and where we store the data on the GPU cluster, how we can feed the GPUs at high speed, this is a huge advantage for us, which is absolutely critical for empowering this at scale. We are working closely with VAST to build high-speed analytics and data crunching systems on which we are building a new wave of Spark clusters. We are building secure data lakes. We are working on building a high-speed platform which can allow us to build cost-efficient foundation models. It's flexible, scalable, able to use the data where and when we need it. Once we have this in place, the research and the community built around it will just grow dynamically, because it's a virtuous cycle of research that we're empowering. We are going for almost one gigawatt of data center, plus we are building our own AI accelerators, on which we are closely working with VAST to create the right kind of performance, because we are planning to build large clusters in India which will be able to train the models which are available in any part of the world, not only for us, but also for the customers. 
It will drive growth and expand the universe of research faster and farther than before. So our whole infrastructure is built around this whole notion of scaling this for us, but making it simpler for everybody else to use. And this is going to be so powerful. Ola and CZI are powerful examples of AI driving real value within some of the world's most pioneering organizations. And while the AI tools of today are already extremely powerful, the world's foundation model builders are scaling up to build vastly more capable AI that will deploy tomorrow. To achieve superintelligence, these foundation model builders require hundreds of thousands of GPUs to process exabytes of data. In Memphis, a team of mighty AI researchers is building one of the world's most powerful AI supercomputers, a system called Colossus. xAI, led by Elon Musk, is using the machine to pioneer all-new fields of AI in order to help us better understand the universe. Their frontier language model, called Grok, blends intelligence with personality to make AI approachable for the world. Today, it's an honor to announce that Grok is being built on the VAST Data Platform. Working with the xAI team has been a real privilege, and the work that we do at extreme scale will make VAST systems better for our broader customer community as they too start to scale up. The VAST Data Platform introduced the idea that a machine could make models more intelligent by training on data, then inferring on data, and then adding new data and model feedback to make models even more accurate. This loop of training and inference is the basis for a new recursive computing paradigm that we see defining the future of intelligent applications. Now, a year later, we have a very clear picture of today's AI pipelines. First, raw unstructured data is ingested from a variety of sources. Second, data is then prepared using modern data engineering tools that can do things like query on data frames. 
Third, well-curated data is presented to GPUs for AI training and model fine-tuning. And fourth, once a model is quantized and ready for inference, organizations need to capture all of the prompts, the responses, and the feedback for model fine-tuning and for regulatory purposes. Now, from a systems perspective, this pipeline needs an event bus. It needs an object store. It needs a data lake, a file system, a runtime, and a data warehouse. With the VAST Data Platform, it's now possible for the first time to consolidate all of the elements of this AI data pipeline onto a single, scalable, and simple platform that supports files, objects, tables, and streams as high-performance interfaces, and functions and triggers to move data through the AI pipeline as one hyperscale dataflow computer, with one notable exception. AI models are getting more intelligent every day, but even the best models have their limits. Since models are only periodically trained and fine-tuned, they're never really current, and we now need systems from which LLMs can retrieve real-time data in order to give AI systems real-time data awareness. Second, enterprises often require access to specialized data that should not be fine-tuned into an AI model, either because it's too much data to train on or in cases where their proprietary data cannot be mixed into AI models for reasons of security, privacy, or regulatory compliance. Retrieval-augmented generation, or RAG, solves this information gap by allowing AI agents to become intelligent interpreters of enterprise data. They're capable of understanding query context. And using natural language, they can then go and reference massive data stores that have been indexed using advanced techniques like data vectorization, which allows AI models to easily retrieve answers by leveraging a semantic understanding of the data that is also determined by AI. That's right. AI is now being used to enable other forms of AI. 
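The four pipeline stages described above can be made concrete with a minimal sketch. The stage names follow the talk; the function bodies are deliberately toy stand-ins (a real pipeline would use data engineering and training frameworks, not string munging), and every name here is invented for illustration.

```python
# A toy walk-through of the four-stage AI data pipeline: ingest,
# prepare, train, capture. Everything below is illustrative only.

def ingest(sources):
    """Stage 1: collect raw unstructured records from multiple sources."""
    return [rec for src in sources for rec in src]

def prepare(records):
    """Stage 2: curate the data -- drop empties, normalize text."""
    return [r.strip().lower() for r in records if r.strip()]

def train(curated):
    """Stage 3: stand-in for GPU training/fine-tuning on curated data."""
    return {"vocab": sorted(set(" ".join(curated).split()))}

def capture(prompt, response, feedback, log):
    """Stage 4: record prompts, responses, and feedback, both for the
    next fine-tuning round and for regulatory audit."""
    log.append({"prompt": prompt, "response": response, "feedback": feedback})
    return log

sources = [["The cat sat. ", ""], ["A DOG ran."]]
model = train(prepare(ingest(sources)))
audit_log = capture("What ran?", "a dog", "correct", [])
```

The point of the consolidation argument is that each arrow between these stages is, today, usually a separate system (event bus, object store, data lake, warehouse) rather than a function call.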
Having said that, as we look out at the landscape of different approaches to AI retrieval, what we're realizing is that modern approaches to building RAG pipelines take us very far from the vision of being easy and secure for real-time AI. Now allow me to break down the pipeline components to illustrate where things really break down. If you look at a standard retrieval pipeline, first, a user or another AI agent will prompt an AI agent with a natural language query. Large language models are able to understand this query context and can either serve an answer from what the model's been trained upon, or they'll go retrieve data if the model cannot itself produce an answer with a high enough confidence score. Then, depending upon the query, it'll go seek an answer from a vector database, where vector embeddings have been created, stored, and indexed to articulate the context discovered from chunks of unstructured data. Or an agent will convert text to SQL to go get an answer directly from an enterprise data warehouse if that answer can be well structured and warehoused. These embeddings are a form of semantic definition of unstructured data, for data types such as video or free text, that have been contextually understood by AI embedding models, and they can then be searched through quickly using similarity search and knowledge graph search analytics tools. Now, the challenge is, as we take two steps back, what we see is a fundamental set of constraints that limit retrieval workflows from realizing their full potential. Number one, these systems are anything but real time. For unstructured data systems, since there's never been an enterprise file and object storage system that has supported native event triggers and its own vector database, the process of creating vector embeddings and indexing them has until now been batch oriented and very cumbersome. 
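The routing logic described in this breakdown, serve from the model when it is confident, otherwise fall back to similarity search over embedded chunks, can be sketched as follows. The bag-of-words `embed` function is a deliberately crude stand-in for a neural embedding model, and all the names and thresholds are invented for the example.

```python
# Sketch of retrieval routing: answer from the model when confidence is
# high, otherwise do vector similarity search over indexed chunks.
import math

def embed(text: str) -> dict:
    """Toy embedding: a word-count vector (stand-in for an AI embedding model)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, model_answer=None, confidence=0.0):
    """Serve the model's own answer if it clears the confidence bar;
    otherwise fall back to similarity search over the chunk index."""
    if model_answer is not None and confidence >= 0.8:
        return model_answer
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = ["the revenue report for fiscal 2023",
          "employee onboarding checklist",
          "gpu cluster maintenance schedule"]
answer = retrieve("when is gpu maintenance", chunks, confidence=0.2)
```

The text-to-SQL branch mentioned above would be a third arm of the same router, dispatching structured questions to a warehouse instead of the vector index.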
Even in the most popular solutions today, directory and file system updates are never globally consistent and universally available, or as I like to say, atomic across systems, because data written to multiple locations isn't reflected in a searchable index until some post-process batch ETL operation is completed. This makes it impossible to ensure that a retrieval operation serves up legitimate and current data to the enterprise. Permissions are also not atomic: RAG pipelines periodically need to index ACLs from file systems and update ACL caches across a bunch of independent systems, and this leads to security and regulatory compliance challenges. Additionally, vector databases themselves were not designed to be very scalable, and the challenges of the shared-nothing architectures that they're built upon mean that the bigger they are, the slower they are to ingest into and the slower they are to search from. Most vector databases are designed to manage no more than a few billion embeddings. On the other hand, if you look at the rapid evolution of AI embedding models, the need for massive vector stores that can index trillions of embeddings and search on them in real time grows by the day, not just for the organizations that are going out and indexing the Internet, but for every enterprise organization sitting on tens of petabytes or even more. Now consider that most data platforms that support RAG require organizations to copy their data from their enterprise NAS systems into cloud object storage buckets in order to be able to access and embed enterprise data. This is a bit nuts. Enterprises should be able to easily index their enterprise file and object storage systems in place, without the need to copy their data into some remote lakehouse. File systems need to evolve. Finally, databases also need to evolve. 
For text-to-SQL retrieval, we've been kidding ourselves to think that these are real-time operations, even when working with the fastest data warehouses. These scalable columnar databases were never designed for rapid record insertion or table updates, so they've always traded transactional performance for scale of capacity and query performance. And as a result, organizations need to build other transactional systems, like Kafka clusters, to intercept event streams and then batch data into data lakes using some background batch ETL process. This is a data consistency problem that AI retrievers have not been designed to overcome. So we live in a state where AI engines never see a true real-time and consistent picture of their data, where copies of enterprise file data are required to use some cloud-based data platforms, and where the scale of unstructured data is about to overwhelm conventional vector databases. These are exactly the types of hard challenges that we love to solve. So let's talk about a radically better way to approach AI retrieval. Today, it's my pleasure to introduce to you the VAST InsightEngine with NVIDIA. This is the world's first real-time AI data streaming, processing, and retrieval engine for all enterprise data. InsightEngine is the first enterprise application workflow that will be hosted natively from within the VAST Data Platform, and it's a continuation of our long-time collaboration with NVIDIA. At its core, InsightEngine extends the capability of the VAST Data Platform while leaning heavily on core architectural advantages that allow us to eliminate the trade-offs associated with AI retrieval. InsightEngine comes prepackaged with NVIDIA inference microservices that are implemented on the VAST OS container runtime, which supports NVIDIA GPUs and leverages VAST DataEngine triggers to automatically create embeddings from your unstructured data. 
The VAST DataBase is also being extended to support vectors and graphs as new data types, and it can perform a similarity search across a database that can house and index trillions of embeddings. With our revolutionary DASE architecture, we've eliminated the need for partitioning across petabytes of metadata, making it possible for the system's processors to search through shared indices in constant time regardless of the system's scale. This makes it possible to answer every vector search instantaneously. InsightEngine also brings intelligence directly into your enterprise file and object storage infrastructure. This eliminates the need to copy data into some cloud-based data lake, and our unified data architecture ensures that any file system or object storage update is automatically synced with the vector database and its indices. In addition, central to this idea of data atomicity is that user- and attribute-based access controls and end-to-end data provenance are also automatically synchronized. ACL information is embedded with the data elements at the source and globally managed, so you never need to worry about coordinating permissions management, and global data provenance ensures adherence to regulatory and compliance requirements. That's critical to every enterprise today. And finally, for text-to-SQL operations, our new support for a Kafka-compatible event broker makes it possible to consolidate your event bus into the VAST Data Platform and stream directly into tables. The system can support millions of inserts per second without needing a complex ETL operation. The result is that not only are your unstructured data queries atomic, but your real-time event data can now be atomic with your data warehouse, giving systems and users the best possible approach to deep analysis of real-time data streams. This ensures that AI can work in real time across the enterprise. Not only is it unified and real time, but it's also simple. 
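One runnable way to see the "event data atomic with the warehouse" property is to insert events one at a time and query immediately after each insert, with no batch ETL hop in between. SQLite stands in for the platform's database here, and the table layout and event shape are invented for the example; the real system would be ingesting millions of events per second in parallel.

```python
# Events stream straight into a table and are queryable the moment they
# land -- no intermediate event bus, data lake, or batch ETL stage.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, sensor TEXT, value REAL)")

def stream_insert(event):
    """Each event is committed individually, so any query that follows
    immediately sees it -- the atomicity property described above."""
    conn.execute("INSERT INTO events VALUES (?, ?, ?)",
                 (event["ts"], event["sensor"], event["value"]))
    conn.commit()

for ev in [{"ts": 1, "sensor": "a", "value": 0.5},
           {"ts": 2, "sensor": "a", "value": 0.7},
           {"ts": 3, "sensor": "b", "value": 0.9}]:
    stream_insert(ev)
    # Real-time analytics over everything ingested so far:
    count, = conn.execute("SELECT COUNT(*) FROM events").fetchone()

latest, = conn.execute(
    "SELECT value FROM events WHERE sensor='a' ORDER BY ts DESC LIMIT 1"
).fetchone()
```

In the batch-ETL world being criticized here, the `SELECT` would instead see a stale snapshot until the next ETL run landed the events in the warehouse.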
All of this comes tightly integrated as a prepackaged workflow that can run natively from within the VAST Data Platform. This eliminates mountains of integration and data management complexity. As you can probably tell, I'm super excited to be talking about the VAST InsightEngine with NVIDIA. We plan to deliver this broadly in the first half of 2025, but please reach out to us today if you wanna get involved early. I could go on and on about this new offering, and I will, but not right now. I invite you to join me later for a product deep dive hosted by me and our field CTO, Andy Pernsteiner. Having said that, as I think about the Build Beyond event that we hosted last year, we laid out an idea for data that will be understood by AI systems, and this in turn can create new insights and discoveries as that data is interacted with by other AI systems. This tight coupling of data and accelerated computing is critical to us solving some of humanity's greatest challenges. No person knows this better than Jensen Huang, the CEO of NVIDIA. Recently, he and Renen had the chance to sit down and discuss the future of accelerated, data-driven computing. Renen. Hi, Jensen. Nice to see you. Nice to see you too. First, thank you for having us and for all of the great collaboration that we've been doing over the last eight and a half years since we started. Yeah. I tell people often we could not have built any of the technology that we have without standing on your shoulders, on NVIDIA technology. Because of the work we've done together, we have this super accelerated and high-performance data fabric that we can bring to the world. And as a result, we work in so many different areas together, all of these AI supercomputers that we work on together. And today's data is, of course, voluminous. Yes. But it's also structured and unstructured, and it's growing incredibly. 
And the more we use AI, the more data we collect, and the more data we can use to train better AIs. That flywheel's incredible. Yeah. Well, since the beginning of computation, it's been logic and data. You need processing units and memory. You need algorithms and data structures. Yeah. And now with your accelerated computing, we can analyze and understand natural data, pictures and video and sound, no longer just numbers and rows of a database the way it was before. You know, that's kinda where our journey started together. All of the great employees at VAST, incredible computer scientists who are deeply knowledgeable about the data layer. And, of course, we're really good at networking. We're really good at computing. Between the two of us, we really solve a lot of the problems that are challenging for the large frontier model makers. Yeah. Somebody said the wheel was a great invention, but it didn't create better wheels. I think AI is the first invention that will create better AIs by leveraging itself. Yeah. And we're just at the beginning of that, as you said, with the large model builders. Yeah. How do you see this play out as it shifts from training to more weight on inference, from batch to real time, from the research labs into the enterprise? Well, we're seeing a couple of trends going on right now. Of course, one of the most important trends is moving towards multimodality. We have a lot of our knowledge embedded in language, but when you augment it with images and video and audio, then the language becomes much, much more robust. The second thing we're seeing and advancing into, of course, is, instead of just one-shot models, we're now building models that can do multistep reasoning. And just like humans, we reason through things. 
The concept of the two of us having our own intelligence, but talking to each other, debating, fleshing out an idea, that's no different in the future from two large language models discussing, debating, fleshing out ideas, and so they're generating data for each other to learn from. Once you can create these frontier models, it's kind of like a teacher model. So these teacher models could teach smaller models. Open sourcing these small language models, or distillations of models, has really opened up and activated the next part, the next wave of AI, which is enterprise. Yeah. And this is the next journey for the two of us. We're now setting the stage for the world's enterprises to really benefit from AI, because there are so many different workloads that we do in our companies that can be helped, like, augmented with AI so that we can enhance automation. Yeah. So historically, the shift from a new frontier into the enterprise was always helped by a software infrastructure stack that made it simpler for enterprises to adopt, who don't have teams of PhDs to set up infrastructure, and made it secure so that they don't need to compromise their data in order to benefit from these new abilities. The enterprise computing platform is complicated because there's security, sovereignty, data gravity, access control. You know, just because it's inside a company, all of the data within the company is not accessible to everybody within the company. Of course, there's the data layer and the security layer, the access control layer. VAST does an incredible job there. And so that's the first layer, then the networking layer. We've been working on bringing AI networking to the enterprise. And the reason for that is because all the things that we were talking about before for large model makers, pioneering model makers, those expertise need to be codified on some platform so that it's easy for enterprises to consume. 
Everything from data curation, fine tuning, guardrailing, all of that journey, you've now worked with us to take the NVIDIA NeMo services and platform and encapsulate it into the VAST AI data platform. AI Foundry sits on top of this entire stack, and it's about helping customers take their proprietary data, the vast amounts of data sitting in VAST that are proprietary and very precious to them. It's really their gold mine. Yeah. And they would like to take that asset, that incredible domain-specific, company-specific data, and transform it into digital intelligence. So now we've created this entire ecosystem that makes it possible for every enterprise to engage AI, transform their data into their own digital intelligence, and connect it into a flywheel Yeah. That sits on top of VAST and NVIDIA. Yeah. And today, we're announcing our first joint collaborative project around enterprise AI, which we call the VAST InsightEngine, obviously leveraging NeMo and NIMs and all of these pieces of the puzzle that NVIDIA has built, bringing it into one easy-to-use, secure pipeline for these enterprises. And it would be counterintuitive for a lot of people: why is it that AI is embedded in your AI data platform? And the reason is exactly as we were talking about earlier. It takes AI to figure out what data you should train with. Well, you mentioned before that AIs are interacting with each other, AI agents talking to each other without a strict API, the way we needed between machines before. Now, do you really see that type of interaction, that brainstorm, achieving new ideas, new scientific theories, new mathematical proofs Mhmm. That maybe us humans didn't think of yet? Yeah. Absolutely.
Well, let's talk about how enterprises, researchers, and engineers are currently moving. They're moving from human-written code to an agentic, AI-driven workflow. And so one of the most important things that's happening is the development of AI workflows that can help enterprises. In the future, I could totally imagine the next step being you telling an AI: this is my basic mission, this is what a good result would look like, and these are all the data you can access based on your access control on the VAST AI data platform. And based on those givens, that AI goes into a repository in our company database and decides, okay, based on the mission that I have, I think I need these three team members and those two team members. So this AI figures out its own assembly of team members, its own compute graph, and it orchestrates among the members. And the way that the AIs talk to each other is gonna be kinda like the way humans talk to each other. I wanted to thank you for everything you did charting the path for all of us to follow, and for the collaboration between VAST and NVIDIA over the last years. It's been a joy to work with your teams, and hopefully we see this future that you describe very, very quickly. Well, Renen, the vast talent that we've had the joy of working with in your company and the vast frontier we have in front of us are a perfect opportunity for VAST, and I'm very proud of you. I'm proud of the team that you've assembled, of the joy of our two companies working together, and of the groundbreaking work that we do together, from accelerating the data plane to now bringing AI to the data plane, and working together to bring AI to the world's enterprises. Yeah.
Each one of these frontiers that we're going through together expands the reach of accelerated computing and expands the reach of AI. So it's been a great eight and a half years working together, and I look forward to the next 80 years working together. I do too. Okay? AI started by learning from data. It is now shifting to learning directly from humans and from human interaction. In the future, AI will learn from other AIs and build better AI in the process. As Jensen and I discussed, what makes AI unique is that it is the first invention that will invent better versions of itself over time. This is where agents come into play. Just as people interact with each other without strict APIs, the same will be true of computers. Our own personal travel agent will interact with the airline's booking agent to find us the best flight. More interestingly, misunderstandings between people form the cracks that make room for new ideas to emerge. This new form of fuzzy interaction will let AI agents brainstorm and come up with new theories. Access to the natural world will allow them to test these theories and advance our collective understanding of the world that surrounds us. When we have a new feature to develop, we assemble a team. It will have an architect, a few developers, and some QA engineers. Members of the team will interact in a way that improves the outcome, helping each other and challenging each other to build the best product. What if we could do the same with AI agents? Today, ServiceNow is already going down this path. So we certainly envision a future where every worker is empowered by a set of AI agents to solve more complex problems and do their work faster. For example, in the near future, we will introduce AI agents that decompose complex problems automatically into simpler steps.
Think about, for example, a complex query that requires multiple searches and then pieces those results together into a single answer. So this really gives us a window into a future where all of our work will be made easier by a collection of agents that work on our behalf. More broadly across the industry, we will continue to see a shift toward deploying AI agents in production, serving extremely complex workloads that we simply cannot imagine today. And, of course, in that, we want the ServiceNow platform to become a sort of control tower for those fleets of AI agents that will be getting work done for us in the enterprise. So for us, VAST was an easy choice. I mean, it was NVIDIA certified, and we were buying an NVIDIA-certified SuperPOD. So it was a very good starting point, but also, VAST has the familiar interface with all the features we expect, without the baggage and trouble. And finally, just in terms of daily operation, the VAST cluster we have soaks up all our model checkpoints without skipping a beat. That's the most important thing for us in terms of speed, because it doesn't pause our training runs while we write. So in all of that, at ServiceNow, we have chosen to focus on what we are good at, which is writing great software and training great models, and to partner with folks like NVIDIA and VAST to provide the missing infrastructure pieces that let us be great at our core business. As enterprise adoption of AI accelerates, our mission is to simplify this journey for our customers. We are excited to announce significant strides forward with our close partners, Cisco and Equinix. Cisco HyperFabric for AI combines Cisco's Nexus switches and UCS servers with the VAST Data Platform, creating a scalable, integrated infrastructure tailored for AI-driven enterprises.
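The multi-search pattern described above, decomposing a complex query into simpler searches and piecing the partial results back together, can be sketched in a few lines. This is a minimal illustration, not ServiceNow's implementation: the rule-based `decompose` planner and the in-memory `KNOWLEDGE_BASE` are hypothetical stand-ins for an LLM-driven planner and a real search backend.

```python
# Illustrative stand-in for a search backend: a tiny fact store.
KNOWLEDGE_BASE = {
    "open incidents": "42 incidents are currently open.",
    "oldest incident": "The oldest open incident was filed 11 days ago.",
}

def decompose(query: str) -> list[str]:
    """Stand-in planner: split a compound question into simpler sub-queries.
    (In practice, an AI agent would do this decomposition.)"""
    return [part.strip() for part in query.split(" and ")]

def search(sub_query: str) -> str:
    """Stand-in search step: answer one simple sub-query."""
    return KNOWLEDGE_BASE.get(sub_query, "no result")

def answer(query: str) -> str:
    """Run every sub-query, then piece the results into a single answer."""
    partials = [search(sq) for sq in decompose(query)]
    return " ".join(partials)

print(answer("open incidents and oldest incident"))
# → 42 incidents are currently open. The oldest open incident was filed 11 days ago.
```

The interesting part is the shape, not the toy lookup: one planning step fans a complex request out into independent searches, and one merge step composes the answer, which is exactly the kind of work an agent can take over.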
Together with Equinix, we're providing a centrally managed private cloud that can feed data-hungry AI applications regardless of where the data resides. Partnerships are at the heart of innovation. Transforming the AI landscape is a collaborative effort, and we are deeply committed to advancing side by side with our extraordinary partners. Let's hear directly from several of them about how collaboration is driving this new AI era forward. The partnership that we have with VAST and NVIDIA is very unique, and it's got such potential. Potential that we're realizing right now with real customers today, whether it's hyperscalers or global banks, companies around the globe that are looking at and leaning into this AI transformation. We're incredibly excited to announce that we're able to support the VAST solution inside of our managed private AI environment. For our enterprise customers, our managed private AI offering means that they don't need to worry about any of the complexity of the underlying infrastructure or its ongoing maintenance. That means they can just make sure their data scientists are using the infrastructure. Managing data centers is hard, and AI makes it even harder. One of the things we've found as we've talked to customers is that for some of the most sophisticated AI workloads, we're actually seeing VAST being the choice that customers have made. Just given the success that VAST has had, it was a pretty easy choice to work closely with VAST and with NVIDIA to provide a full stack. We're solving fundamental problems that are the impediment to this creativity. And what we enjoy about our relationship with VAST is that VAST has built a product, and built a culture, that is every bit as focused on solving problems and providing solutions to clients as we are.
And to work with a group that is able to boil down the process of building and running a business to that fundamental belief, which is the same fundamental belief that we subscribe to, makes the partnership incredibly accretive to both businesses, and certainly to our collective clients, in a way that is really unique. We have been working together for a while, serving lots of customers together, including some enterprises and some huge data centers. So it has been a very successful story, and we will continue to put the best total solution into the market. To put it in a nutshell, the convergence of data with storage, all of it enabled by the same platform, affords a level of simplicity for end users, at a cost point and performance that are simply not available today. In the AI era, data is the lifeblood of innovation. However, realizing AI's full potential is a complex journey, often hindered by fragmented technology, high cost of entry, and the need to enhance skills for building, deploying, and leveraging these new tools. As with any major technology shift, early adopters turn to user communities to share experiences, collaborate, explore new ideas, and accelerate their own learning. At VAST, our goal is to help AI practitioners across all industries and regions achieve new levels of success. We're committed to investing in and contributing to a community of individuals and organizations that collaborate to share best practices, discuss new use cases, and elevate each member's experience. We'll bring together influential leaders from every dimension of the AI ecosystem and notable experts at all stages of AI implementation, and foster a community of builders. I invite you to hear from a few of these experts as they share their vision of AI and the importance of collaboration in ensuring its rapid and efficient adoption. Humans are at their best when they collaborate.
When you think about Cosmos, and you think about bringing a community together, that's gonna be really, really important as we move forward. You know, it used to be that we had these very vertical vendor solutions, but what's really changed over the last several years is that developers want to share and love to share. Overall, I think the collaboration is gonna help us move faster. We're gonna learn more together. We are still, I think, in the second or third inning of the AI journey at this point. That aspect of working with others is essential to make progress. No company can work on an island and be successful. This is moving forward so fast, and there's so much to learn still. Community is essential. We're gonna need the community to come together and say, hey, we've run our use cases, and it's actually good enough for this to be certified as a solution template that can go into a standard repository and be deployed broadly, right? So we want to have a community that does that. Being able to bring together a community, especially enterprises, matters. They want to learn how fast things are moving, what the best practices are, and what people are doing today, specifically in the world of AI. It's going to allow for idea creation and connections to be made. I think Lambda is super excited to participate in that. All the people who work together bring different ideas and different challenges, and then together we fix the problems, improve the work, and make AI much more powerful, much faster, much safer, and eventually beneficial to everyone. I would like Cosmos to become that forum, that community for everybody to participate and learn, for everybody to bring their best, and to develop something new, to share ideas, to break barriers.
We're on a journey of discovery of how to use these technologies, how these use cases are gonna evolve, and what the best infrastructure patterns and design patterns emerging on the software side are gonna be. And I think creating a tight community where you can have open discussion, share best practices, share fast failures that have happened, and say, what's working for you, and how can I learn from that, is incredibly powerful. Just sharing the knowledge and the things that we learn along the way. If the entire community can work together on creating the best practices, the right way to do AI, together, that would be amazing. We really believe in the power of bringing people together toward a common cause to improve AI and make it beneficial for all of humanity. Building community in AI is super important. None of us succeed alone, and the technology, the ecosystem, is so complicated. I think all of us do better together. I'm a big believer in co-creation and in the ability to collaborate with industry partners, and even, to an extent, in understanding how to leverage the competitive tension in the market, with a lot of players trying to support the evolution of this new phase of AI. And Cosmos is an interesting way that I think most companies involved in this program can gain benefit from the ecosystem that's being created. Building out a community of innovators and collaborators all working together is going to be part of making all of this scale, and that fits right in line with what the Chan Zuckerberg Initiative wants to do with our community efforts to grow researchers' ability to scale without limitations. Getting input from experts in the field, and also sharing input, allows us to grow together and deliver a better experience than anyone else can deliver. The hardest problems are never solved by any one person.
This is gonna be done by partnerships up and down the supply chain, with regulators, with technology companies, with banks. AI is an incredibly democratizing force if we'll let it be. We couldn't be more excited to collaborate with these amazing industry leaders whose companies are paving the way for AI technology. And in the spirit of collaboration, I'm excited to welcome you to join us on the journey as we introduce Cosmos, an ambitious initiative designed to develop and foster a vibrant AI community, to simplify and accelerate AI adoption, and to pioneer the next frontier of innovation. Cosmos will open lines of communication and provide a real-time collaboration platform for practitioners, researchers, and solution providers to exchange ideas and knowledge. Organizations can leverage Cosmos to take advantage of new AI labs and a growing library of proven reference architectures from leading vendors, service and solution providers, educational institutions, and innovators from every sector, to educate and train community members and drive AI adoption and innovation. And because Cosmos is fundamentally a user community, we felt it would be a huge miss if we didn't bring in the organizations that are building the AI models of tomorrow. And so today, I'm pleased to announce xAI as a founding member of Cosmos, representing the agenda of foundation model builders. So to all of our customers and partners, current and future, we invite you to join us in this bold adventure. Together, we can build an ecosystem where AI technologies are seamlessly integrated and effectively implemented, and where talent is more than just developed, it's fully realized. So let me be the first to welcome you to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Welcome to Cosmos. Everyone, welcome to Cosmos. Wow. What an incredible day it's been.
I wanna say a special thank you to the many customers, partners, and thought leaders who joined us today, and to the team at VAST, who continue to raise the bar and extend VAST's lead in data and AI infrastructure with new technology, powerful ecosystem partnerships, and investment in community, ensuring that every organization, and each of you as individuals, can fully participate in the AI revolution. I hope you're as excited and inspired as I am today. Now I invite you to continue your journey with us. Dive into the online breakout sessions and additional on-demand content, organized in five tracks. AI Thought Leaders: the bold predictions that are shaping tomorrow. AI-Powered Business: real-world examples of organizations using AI to drive success. AI Cloud Innovators: pushing the boundaries of AI in the cloud. AI Data Stack Builders: the foundation behind AI, built by the brightest engineers. Hands-On: AI and data management in practice, with practical tools, labs, and demos of capabilities you can apply right away. Join us online at thecosmos.ai, or in your local community at one of our upcoming world tour events, and be sure to visit us at vastdata.com. Thank you.