Shawn Simmons - NVIDIA Corp. Colette M. Kress - NVIDIA Corp. Jen-Hsun Huang - NVIDIA Corp..
Mark Lipacis - Jefferies LLC Vivek Arya - Bank of America Merrill Lynch C.J. Muse - Evercore Group LLC Toshiya Hari - Goldman Sachs & Co. Atif Malik - Citigroup Global Markets, Inc. Craig A. Ellis - B. Riley & Co. LLC Hans Mosesmann - Rosenblatt Securities, Inc. Joseph L. Moore - Morgan Stanley & Co. LLC Blayne Curtis - Barclays Capital, Inc.
Mitch Steves - RBC Capital Markets LLC.
Good afternoon. My name is Victoria, and I'm your conference operator for today. Welcome to NVIDIA's financial results conference call. Thank you. I'll now turn the call over to Shawn Simmons from Investor Relations. You may begin your conference..
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2018. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay via telephone until May 16, 2017. The webcast will be available for replay up until next quarter's conference call to discuss Q2 financial results. The content of today's call is NVIDIA's property.
It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risk factors and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, May 9, 2017, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures.
You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette..
Thanks, Shawn. We had a strong start to the year. Highlighting our record first quarter was a near tripling of data center revenue, reflecting surging interest in artificial intelligence. Overall, quarterly revenue reached $1.94 billion, up 48% from the year earlier, down 11% sequentially and above our outlook of $1.9 billion.
Growth remained broad-based, with year-on-year gains in each of our four platforms, gaming, professional visualization, data center, and automotive. From a reporting segment perspective, Q1 GPU revenue grew 45% to $1.56 billion from a year earlier, and Tegra processor revenue more than doubled to $332 million.
And we recognized the remaining $43 million in revenue from our Intel agreement. Let's start with our gaming platforms. Gaming revenue in the first quarter was $1.03 billion, up 49% year on year. Gamers continue to show great interest in the Pascal-based GPUs, including gaming notebooks. Our Tegra gaming platform also did extremely well.
Demand remained healthy for our enthusiast class GeForce GTX 1080 GPU, introduced nearly a year ago. It was complemented this past quarter with the GTX 1080 Ti, which runs 35% faster and was launched at the annual Game Developers Conference in San Francisco. The GTX 1080 Ti is designed to handle the demands of 4K gaming and high-end VR experiences.
Typical of many supportive reviews, Ars Technica stated it is undoubtedly a fantastic piece of engineering, cool, quiet, and without rival. Those that demand the absolute very best in cutting-edge graphics need look no further.
We also released the next generation of our TITAN-class product, the TITAN Xp, designed for enthusiasts and researchers who demand extreme performance. Gaming continues to be driven by the headlong growth in e-sports. The newest title, Overwatch, added 30 million gamers in its first year.
GeForce was the graphics platform of choice at all the top e-sports tournaments, including the finals of the big four international competitions. With apologies to the start of the baseball season, e-sports is now as popular among U.S. male millennials as America's favorite pastime. More people watch gaming than HBO, Netflix, ESPN, and Hulu combined.
GeForce sales remained underpinned by the steady stream of AAA titles coming onto the market, which continue to push for more GPU performance. In the months ahead, we'll see a series of highly anticipated blockbuster titles.
Among them are Destiny 2 coming to the PC for the first time, Star Wars Battlefront II, Shadow of War, and the next installment of the Call of Duty franchise, World War II. We are excited to be working with Nintendo on its acclaimed Switch gaming system.
Great reviews and reports of the system selling out in many geographies point to a strong start for this platform. Moving to professional visualization, Quadro revenue grew to $205 million, up 8% from a year ago, amid continued demand for high-end real-time rendering and more powerful mobile workstations.
We are seeing a significant increase in professional VR solutions, driven by Quadro P6000 GPUs. Lockheed Martin is deploying Quadro to create realistic VR walk-throughs of the U.S. Navy's most advanced ships. The Marines utilize VR to train aircrew personnel.
And IKEA is rolling out VR to many of its stores, helping consumers configure their kitchens from a huge array of options, which they can visualize in sharp detail. Next, data center. Record revenue of $409 million was nearly triple that of a year ago. The 38% rise from Q4 marked its seventh consecutive quarter of sequential improvement.
Driving growth was demand from cloud service providers and enterprises building training clusters for web services, plus strong gains in high-performance computing, GRID graphics virtualization, and our DGX-1 AI supercomputer. AI has quickly emerged as the single most powerful force in technology, and at the center of AI are NVIDIA GPUs.
All of the world's major Internet and cloud service providers now use NVIDIA Tesla-based GPU accelerators: AWS, Facebook, Google, IBM, and Microsoft, as well as Alibaba, Baidu, and Tencent. We also announced that Microsoft is bringing NVIDIA Tesla P100 and P40 GPUs to its Azure cloud.
Organizations are increasingly building out AI-enabled applications using training clusters, evident in part by growing demand for DGX-1. We are seeing a number of significant deals.
Among them are Fujitsu's installation of 24 systems integrated into an AI supercomputer for RIKEN, Japan's largest research center, as well as new supercomputers at Oxford University, GE, and Audi. Working with Facebook, we announced the launch of the Caffe2 deep learning framework as well as Big Basin servers with Tesla P100 GPUs.
To help meet huge demand for expertise in the field of AI, we announced earlier today plans to train 100,000 developers this year through the NVIDIA Deep Learning Institute, representing a 10x increase from last year.
Through onsite training, public events, and online courses, DLI provides practical training on the tools of AI to developers, data scientists, and researchers. Our HPC business doubled year on year, driven by the adoption of Tesla GPUs into supercomputing centers worldwide.
The use of AI and accelerated computing in HPC is driving additional demand in government, higher education and research, and finance. Our GRID graphics virtualization business more than tripled, driven by growth in business services, education, and automotive.
Intuit's latest TurboTax release deploys GRID to connect tax filers seeking real-time advice with CPAs. And Honda is using GRID to bring together engineering and design teams based in different countries. Finally, automotive, revenue grew to a record $140 million, up 24% year over year and 9% sequentially, primarily from infotainment modules.
We are continuing to expand our partnerships with companies using AI to address the complex problems of autonomous driving. Since our DRIVE PX 2 AI car platform began shipping just one year ago, more than 225 car and truck makers, suppliers, research organizations, and startups have begun developing with it.
That number has grown by more than 50% in the past quarter alone, the result of the platform's enhanced processing power and the introduction of TensorRT for its in-vehicle AI inferencing. This quarter, we announced two important partnerships.
Bosch, the world's largest auto supplier, which does business with carmakers all over the world, is working to create a new AI self-driving car computer based on our Xavier platform. And PACCAR, one of the largest truck makers, is developing self-driving solutions for Peterbilt, Kenworth, and DAF.
We continue to view AI as the only solution for autonomous driving. The nearly infinite range of road conditions, traffic patterns, and unexpected events are impossible to anticipate with hand-coded software or computer vision alone.
We expect our DRIVE PX 2 AI platform to be capable of delivering Level 3 autonomy for cars, trucks, and shuttles by the end of the year, with Level 4 autonomy moving into production by the end of 2018.
Now turning to the rest of the Q1 income statement, GAAP and non-GAAP gross margins for the first quarter were 59.4% and 59.6%, respectively, reflecting the decline in Intel licensing revenue. Q1 GAAP operating expenses were $596 million. Non-GAAP operating expenses were $517 million, up 17% from a year ago, reflecting hiring for our growth initiatives.
GAAP operating income was $554 million and non-GAAP operating income was $637 million, nearly doubling from a year ago. For the first quarter, GAAP net income was $507 million. Non-GAAP net income was $533 million, more than doubling from a year ago, reflecting revenue strength as well as gross margin and operating margin expansion.
For fiscal 2018, we intend to return approximately $1.25 billion to shareholders through share repurchases and quarterly cash dividends. In Q1, we paid $82 million in quarterly cash dividends. Now turning to the outlook for the second quarter of fiscal 2018, we expect revenue to be $1.95 billion plus or minus 2%.
Excluding the expiry of the Intel licensing agreement, total revenue is expected to grow 3% sequentially. GAAP and non-GAAP gross margins are expected to be 58.4% and 58.6%, respectively, plus or minus 50 basis points. These reflect an approximately 100-basis-point impact from the expiry of the Intel licensing agreement.
GAAP operating expenses are expected to be approximately $605 million. Non-GAAP operating expenses are expected to be approximately $530 million. GAAP OI&E is expected to be an expense of approximately $8 million, inclusive of additional charges from early conversions of convertible notes.
Non-GAAP OI&E is expected to be an expense of approximately $3 million. GAAP and non-GAAP tax rates for the second quarter of fiscal 2018 are both expected to be 17% plus or minus 1%, excluding discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
Finally, this week we are sponsoring our annual GPU Technology Conference here in Silicon Valley. Reflecting the surging importance of accelerated computing, GTC has grown to more than 7,000 attendees from 60 countries, up from 1,000 when we started eight years ago. Among its highlights, Jen-Hsun will deliver a news-filled keynote tomorrow morning.
We have 550-plus talks, more than half on AI. Developers will have access to 70 labs and workshops to learn about deep learning and GPU computing. And we will award a total of $1.5 million to the six most promising companies among the 1,300 in our Inception program for AI startups.
We will be hosting our annual Investor Day tomorrow and hope to see many of you there. We will now open the call for questions. Please limit your questions to two.
Operator, will you please poll for the questions?.
Certainly. Your first question comes from the line of Mark Lipacis from Jefferies..
GPU as a service versus the cloud companies' own AI effort. And I'm hoping you could help us understand to the extent where the demand is falling into either one of those buckets. And then on the enterprise side, I think there's a view out there that the enterprise is going to the cloud.
So to hear you talk about training clusters for web services is very interesting, and I was hoping you could provide some more color on that demand driver..
Yeah, Mark, thanks for your question. So our GPU computing business for data center is growing very fast, and it's growing on multiple dimensions. On the one hand, there's high-performance computing using traditional numerical methods. We call that HPC. That's growing. In the enterprise, there's the virtualization of graphics.
There's a whole lot of desktop PCs running around. However, more and more people would like to have thinner laptops or they would like to have a different type of computer and still be able to run Windows. And they would like to virtualize basically their entire PC and put it in the data center. It's easier to manage.
The total cost of ownership is lower. And mobile employees could enjoy their work wherever they happen to be. And so the second pillar of that is called GRID, and it's basically virtualizing the PC. And as you can tell, virtualization, mobility, better security, those are all driving forces there. And then there's the Internet companies.
And the Internet companies, as you mentioned, really have two pillars.
There's the Internet service provision part, where they're using deep learning for their own applications, whether it's photo tagging or product recommendation or recommending a restaurant or something you should buy or personalizing your webpage, helping you with search, provisioning up the right apps, the right advertisement, language translation, speech recognition, so on and so forth.
There's a whole bunch of amazing applications that are made possible by deep learning. And so Internet service providers are using it for internal application development. And then lastly, what you mentioned is cloud service providers.
And basically, because of the adoption of GPUs and because of the success of CUDA and so many applications are now able to be accelerated on GPUs, so that we can extend the capabilities of Moore's Law so that we can continue to have the benefits of computing acceleration, which in the cloud means reducing cost.
That's on the cloud service provider side of the Internet companies. So that would be Amazon Web Services, it's the Google Compute cloud, Microsoft Azure, the IBM cloud, Alibaba's Aliyun [audio gap] (18:31 – 19:03) by Microsoft Azure.
We're starting to see almost every single cloud service around the world standardizing on the NVIDIA architecture, so we're seeing a lot of growth there as well. So I think the nut of it all is that we're seeing data center growth and GPU computing across the board..
As a follow-up if I may, on the gaming side, what we have observed over time is that when you launch a new platform, it definitely creates demand, and you see 12 months of very good visibility into growth. And I was wondering, as you see the data center numbers come in quarter after quarter here, to what extent do you think the data center demand that you're seeing is – I know probably you're only able to answer qualitatively.
But to what extent do you think the data center is secular versus you have a new platform and there's just platform-driven demand?.
PC gaming is growing. There's no question about that. E-sports is growing – the number of players in e-sports and the number of people who are enjoying e-sports are growing. MOBA [Multiplayer Online Battle Arena] is growing. I think it's amazing, the growth of MOBA and the latest games. And of course, the first-party titles, the AAA titles are doing great.
Battlefield is doing great, and I'm looking forward to the new Battlefield. I'm looking forward to the new Star Wars and I'm looking forward to the first time that Destiny is coming to the PC. As you know, it was a super hit on consoles, but the first-generation Destiny wasn't available on PC.
Destiny 2 is coming to the PC, so I think the anticipation is pretty great. So I would say that PC gaming continues to grow, and it's hard to imagine people [audio gap] (21:03 – 21:39) around in another amazing world. So I think people are going to be amazed at how long the alternate reality of the videogame market is going to continue..
Your next question comes from the line of Vivek Arya from Merrill Lynch..
Thanks for taking my question and congratulations on the solid results and execution. Jen-Hsun, for my first one, it's on the competitive landscape in your data center business. There has been more noise around FPGA or CPU or ASIC solutions also chasing the same market.
What do you think is NVIDIA's sustainable competitive advantage? And what role is CUDA playing in helping you maintain this lead in this business?.
Vivek, thanks for the question. First of all, it's really important to understand that the data centers, the cloud service providers, the Internet companies, they all get lumped together in one conversation. But obviously, the ways they use computers are very different.
There are three major pillars of computing up in the cloud or in large data centers, hyperscale. One pillar is just internal use of computing systems, for developing, for training, for advancing artificial intelligence. That's a high-performance computing problem. It's a very complicated software problem. The algorithms are changing all the time.
They're incredibly complicated. The work that the AI researchers are doing is not trivial, and that's why they're in such great demand. And it's also the reason why computing resources have to be provisioned to them so that they can be productive.
Having a scarce AI researcher waiting around for a computer to finish simulation or training is really quite unacceptable. And so that first pillar is the market that we – is a segment of the [audio gap] (23:46 – 24:18) once the network is trained, it is put into production. Like for example, your Alexa speakers have a little tiny network inside.
And so obviously, you can do inferencing on Alexa. It does voice recognition on a hot keyword. In the long term, your car will be able to do voice recognition and speech recognition.
(24:42) Are we okay? Are we still on?.
Yes. I think the....
No, Vivek, I was wondering whether the phone line was cut or not. So anyways, the second pillar is inferencing. And inferencing, as it turns out, is far, far less complicated than training – a billion times, a trillion times less complicated. And so once the network is trained, it can be deployed.
And there are thousands of networks that are going to be running inside these hyperscale data centers, thousands of different networks, not one, thousands of different types. And they're detecting all kinds of different things.
They're inferring all kinds of different things, classifying, predicting, all kinds of different things, whether it's photos or voice or videos or searches or whatnot. And in that particular case, the current incumbent is CPUs.
The CPU is really the only processor at the moment that has the ability to basically run every single network. And I think that's a real opportunity for us, and it's a growth opportunity for us. And one would suggest that FPGAs are as well. One would suggest that ASICs like TPUs are as well.
And I would urge you to come to the keynote tomorrow, and maybe I'll say a few words about that tomorrow as well. And then the last pillar is cloud service providers, and that's basically the outward public cloud provisioning a computing approach. It's not about provisioning inferencing. It's not about provisioning GPUs.
It's really provisioning a computing platform.
And that's one of the reasons why the NVIDIA CUDA platform and all of our software stack that we've created over time, whether it's for deep learning or molecular dynamics or all kinds of high-performance computing codes or linear algebra or computer graphics, all of our different software stacks make our cloud computing platform valuable, and that's why it's become the industry standard for GPU computing.
And so those are three different pillars of hyperscalers, and it's just important to segment them so that we don't get confused..
That's very helpful. And as my quick follow-up, Jen-Hsun, there is a perception that your gaming business has been driven a lot more by pricing and adoption of more premium product, and hence there could be some kind of ceiling to how much gamers are willing to pay for these products.
Could you address that? Are you seeing the number of gamers and the number of cards grow, and how long can they continue to reach for more premium products? Thank you..
The average selling price of the NVIDIA GeForce is about a third of a game console. That's the way to think about it. That's the simple math. People are willing to spend $200, $300, $400, $500 for a new game console, and the NVIDIA GeForce GPU PC gaming card is on average far less. There are people who just absolutely demand the best.
And the reason for that is because they're driving a monitor or they're driving multiple monitors at a refresh rate well beyond a TV.
So if you have a 4K or you want 120 hertz or some people are even driving it to 200 hertz, those kind of displays demand a lot more horsepower to drive than an average television, whether it's 1080p or 4K at 60 frames a second or 30 frames a second. And so the amount of horsepower they need is great.
But that's just because they just really love their rig, and they're surrounded in it, and they just want the best. But the way to think about that is ultimately that's the opportunity for us. I think GeForce is a game console.
And the right way to think about that is at an equivalent ASP of some $200 – $300, that's probably potentially an opportunity ahead for GeForce..
Your next question comes from the line of C.J. Muse with Evercore..
Good afternoon, thank you for taking my questions. I guess first question is around gaming. I was hoping you could walk through how you're thinking about seasonality here in calendar 2017, particularly as the Pascal launch calendarizes and you get the next generation launch coming, I presume, in early 2018.
I would love to hear your thoughts on how we should think about the trajectory of that business..
First of all, GeForce is sold a unit at a time and it's sold all over the world and it's a consumer product. It's a product that is sold both into our installed base as well as growing our installed base. When we think about GeForce, these are the parameters involved.
How much of our installed base has upgraded to Pascal? How much of our installed base is growing? How is gaming growing overall? What are the driving dynamics of gaming, whether it's e-sports or MOBA or using games for artistic expression? It's related to the AAA titles that are coming out. Some years the games are just incredible.
Some years the games are less incredible. These days the production quality of the games has just become systematically so good that we've had years now of blockbuster hits. So these are really the dimensions of it.
And then it's overlaid on top of it with some seasonality because people do buy graphics cards and game consoles for Christmas and the holidays, and there are international holidays where people are given money as gifts and they save up the money for a new game console or a new game platform.
And so in a lot of ways our business is driven by games, so it's not unlike the characteristics of the rest of the gaming industry..
Very helpful. I guess as my follow-up, on the inventory side, that grew I think 3% sequentially. Can you walk through the moving parts there? What's driving that, and is foundry diversification part of that? Thank you..
The driving reason for inventory growth is new products, and that's probably all I ought to say for now. I would come to GTC. Come to the keynote tomorrow. I think it will be fun..
Great, thanks a lot..
Yeah, thanks, C.J..
Your next question comes from the line of Toshiya Hari from Goldman Sachs..
Hi, congrats on the strong quarter.
Jen-Hsun, can you maybe talk a little bit about the breadth of your customer base in data center relative to maybe 12 months ago? Are you seeing the same customer group buy more GPUs, or is the growth in your business more a function of the broadening of your customer base?.
Thanks, Toshiya. Let me think here. I think one year ago – one year ago was – maybe it was two years ago. Maybe it was somewhere between 18 months ago or so when I think Jeff Dean gave a talk where he said that Google was using a lot of GPUs for deep learning.
I think it wasn't much longer ago than that, and really that was the only public customer that we had in the hyperscale data center. Fast-forward a couple years, we now have basically everybody.
Every hyperscaler in the world is using NVIDIA, either for deep learning or for data center deployments – some announcements that you'll read about tomorrow, hopefully. And then a lot of them have now standardized on provisioning the NVIDIA architecture in the cloud.
And so I guess in the course of one or two years, we went from basically hyperscale being an insignificant part of our overall business to quite a large part of our business, and as you can see, also the fastest-growing part of the business..
Okay. And then as my follow-up, I had a question for Colette. Three months ago, I think you went out of your way to guide data center up sequentially. And for the July quarter, ex the Intel business going away, you're guiding revenue up 3% sequentially. Can you maybe provide some additional color on the individual segments? Thank you.
Thanks for the question. We feel good about the guidance that we're providing for Q2. We wanted to make sure that it was understood the impact of Intel that's incorporated in there.
It's still too early – given that the guidance is about the same size as what we just finished in Q1 – to comment specifically on exactly where we think each one of those businesses will end up. But again, we do believe data center is a super-great opportunity for us. I think you'll hear more about that tomorrow.
But we don't have any more additional details on our guidance, but we feel good about the guidance that we gave..
Thank you..
Your next question comes from the line of Atif Malik from Citigroup..
Hi, thanks for taking my question and congratulations on the strong results and guide. Jen-Hsun, can you talk about the adoption of GPU in the cloud? At the CES earlier this year, you guys announced GeForce NOW. Curious how the adoption of GeForce NOW is going..
Yes, Atif, thanks for the question. GeForce NOW is really an exciting platform. It virtualizes GeForce. It puts it in the cloud, turns it into a gaming PC that's a service, that can be streamed as a service. And I said at CES that around this time we'll likely open it up for external beta.
We've been running internal beta for some time, and we'll shortly go to external beta. And the last time I checked, there's many, many tens of thousands of people who are signed up for external beta trials. And so I'm looking forward to letting people try it.
But the important thing to realize about that is that's still years away from really becoming a major gaming service. And it's still years away from being able to find the right balance between cost and quality of service and the pervasiveness of virtualizing a gaming PC. So we've been working on it for several years, and these things take a while.
My personal experience is almost every great thing takes about a decade. And if that's so, then we've got a few more years to go..
Great.
As a follow-up, with your win and success in Nintendo Switch, does that open up the console market with other console makers? Is that a business that is of interest to you?.
Consoles are not really a business to us. They're a business to them. And we're selected to work on these consoles if it makes sense, the strategic alignment is great, and we're in a position to be able to do it, because the opportunity cost of building a game console is quite high.
The number of engineers who know how to build computing platforms like this – and in the case of the Nintendo Switch, it's just an incredible console that fits in such a small form factor. And it could both be a mobile gaming device as well as a console gaming device. It's just really quite amazing, and they just did an amazing job.
Somebody asked me a few months ago before it was launched how I thought it was going to do. And of course without saying anything about it, I said that it delighted me in a way that no game console has done in the last 10 – 15 years. And it's true, this is a really, really innovative product and really quite ingenious.
And if you ever have a chance to get it in your hands, it's just really delightful. And so in that case, the opportunity to work on it was just really, really too enticing.
We really wanted to do it, but it always requires deep strategic thought because it took several hundred engineers to work on, and they could be working on something else like all of the major initiatives we have. And so we have to be mindful about the strategic opportunity cost that goes along with these.
But in the case of the Nintendo Switch, it's just a home run. I'm so glad I did it, and it was the perfect collaboration for us..
Your next question comes from the line of Craig Ellis from B. Riley..
Yes, thanks for taking the question and congratulations on the real strong execution. I wanted to follow up on some of the prepared comments on automotive with my first question, and it's this. I think Colette mentioned that there were 225 car and truck development engagements that were underway, up 50% in the last quarter.
The question is, as you engage with those partners, what's NVIDIA finding in terms of the time from engagement to revenue generation? And what are you finding with your hit rate in terms of converting those individual engagements into revenues?.
The second one is easier. The revenue contribution is not significant at this moment. But I expect it to be high, and that's why we're working on it. The 200 developers who are working on the DRIVE PX platform are doing it in a lot of different ways.
And at the core, it's because in the future, every aspect of transportation will be autonomous. And if you think through what's going on in the world, one of the most important and powerful effects that's happening right now is the Amazon effect. We're grabbing our phone, we're buying something, and we expect it to be delivered to us tomorrow.
When you send up that set of electronic instructions, the next thing that has to happen is a whole bunch of trucks have to move around, and they have to go from trucks to maybe smaller trucks and from smaller trucks to maybe a small van that ultimately delivers it to your house. And so if you will, transportation is the physical Internet.
It's the atomic Internet. It's the molecular Internet of society. And without it, everything that we're experiencing today wouldn't be able to continue to scale.
And so you could imagine everything from autonomous trucks to autonomous cars surely and autonomous shuttles and vans and motorcycles and small pizza delivery robots and drones and things like that. And for a long time, it's going to augment truck drivers and delivery professionals, who quite frankly we just don't have enough of.
The world is just growing too fast in this instant-delivery, delivered-to-your-home, delivered-to-you-right-now phenomenon, and we just don't have enough delivery professionals.
And so I think autonomous capability is going to make it possible for us to take pressure off that system and reduce the amount of accidents and make it possible for that entire infrastructure to be a lot more productive. And so that's one of the reasons why you're seeing so much enthusiasm. It's not just the branded cars.
I think the branded cars get a lot of attention and we're excited about our partnerships there. And gosh, I love driving autonomous cars. But in the final analysis, I think the way to think about the autonomous future is every aspect of mobility and transportation and delivery will have autonomous – will be augmented by AI..
That's very helpful color, Jen-Hsun. The follow-up is related to the data center business, and you provided a lot a very useful customer and other information. My question is higher level.
Given your very unique position in helping to nurture AI for the last many years and your deep insights into the way that customers are adopting this, as investors try and understand the sustainability of recent growth, can you help us understand where you believe AI adoption is overall? And since Colette threw out a baseball comment earlier, if we thought about AI adoption in reference to a nine-inning game, where are we in that nine-inning game?.
Let's see here. It's a great question, and there are a couple ways to come at it. First of all, AI is going to infuse all of software. AI is going to eat software. Whereas Marc [Andreessen] said that software is going to eat the world, AI is going to eat software, and it's going to be in every aspect of software.
Every single software developer has to learn deep learning. Every single software developer has to apply machine learning. Every software developer will have to learn AI. Every single company will use AI. AI is the automation of automation. We're going to see, for the very first time, the transmission of automation, the way we're seeing the transmission and wireless broadcast of information. I'm going to be able to send you automation, send you a little automation by email. And so the ability for AI to transform industry is well understood now.
It's really about automation of everything, and the implication of it is quite large. We've been using now deep learning – we've been in the area of deep learning for about six years. And the rest of the world has been focused on deep learning for somewhere between one to two years, and some of them are just learning about it.
And almost no companies today use AI in a large way. So on the one hand, we know now that the technology is of extreme value, and we're getting a better understanding of how to apply it. On the other hand, no industry uses it at the moment. The automotive industry is in the process of being revolutionized because of it.
The manufacturing industry will be. Everything in transport will be. Retail, e-tail, everything will be. And so I think the impact is going to be large, and we're just getting started. We're just getting started. Now that's kind of a first inning thing.
The only trouble with a baseball analogy is that in the world of tech, things don't – every inning is not the same. In the beginning the first inning feels like – it feels pretty casual and people are enjoying peanuts. The second inning for some reason is shorter and the third inning is shorter than that and the fourth inning is shorter than that.
And the reason for that is because of exponential growth. Speed is accelerating. And so from the bystanders who are on the outside looking in, by the time the third inning comes along, it's going to feel like people are traveling at the speed of light next to you. If you happen to be on one of the photons, you're going to be okay.
But if you're not on the deep learning train in a couple of two, three innings, it's gone. And so that's the challenge of that analogy because things aren't moving in linear time. Things are moving exponentially..
Your next question comes from the line of Hans Mosesmann with Rosenblatt Securities..
Thank you. Congratulations, guys. Hey, Jen-Hsun, can you give us like a state of the union on process node and technology roadmaps that you guys see? Intel made a pretty nice exposition of where they are in terms of their transistors and so on. So what's your comfort level as you see process technology and your roadmaps for new GPUs? Thank you..
Yes. Hi, Hans. I think there are a couple of ways to think about it. First of all, the world calls it the end of Moore's Law, but it's really the end of two dynamics that have happened. And one dynamic of course is the end of productive innovation in processor architecture, the end of instruction-level parallelism advances. The second is the end of Dennard scaling. And the combination of those two things makes it look like it's the end of Moore's Law.
The easy way to think about that is that we can no longer rely – if we want to advance computing performance, we can no longer rely on transistor advances alone. That's one of the reasons why NVIDIA has never been obsessed about having the latest transistors. We want the best transistors. There's no question about it, but we don't need it to advance.
And the reason for that is because we advance computing on such a multitude of levels, all the way from architecture, this architecture we call GPU accelerated computing, to the software stacks on top, to the algorithms on top, to the applications that we work with. We tune it across the stack, from top to bottom and from bottom to top.
And so as a result, transistors is just one of the 10 things that we use. And like I said, it's really, really important to us. And I want the best, and TSMC provides us the absolute best that we can get, and we push along with them as hard as we can. But in the final analysis, it's one of the tools in the box..
Thank you..
Your next question comes from the line of Joe Moore from Morgan Stanley..
Great, thank you. I've attended GTC the last couple days. I'm really quite impressed by the breadth of presentations and the number of industries you guys are affecting.
And I guess, just on that note, how do you think about segmenting the sales effort? Do you have a healthcare vertical, an avionics vertical, a financial vertical, or is it having the best building blocks and you're letting your customers discover stuff?.
Thanks a lot, Joe. You answered it right there. It's both of those. The first thing is that we have developed platforms that are useful per industry. And so we have a team working with the healthcare industry. We have a team that's working with the Internet service providers. We have a team that's working with the manufacturing industry.
We have a team that's working with the financial services industry. We have a team that's working with media and entertainment and with enterprise, with the automotive industry. And so we have different verticals.
We call them verticals, and we have teams of business development people, developer relations, and computational mathematicians who work with each one of the industries to optimize their software for our GPU computing platform. And so it starts with developing a platform stack. Of course, one of our most famous examples of that is our gaming business.
It's just another vertical for us, and it starts with GameWorks that runs on top of GeForce, and it has its own ecosystem of partners. And so that's for each one of the verticals and each one of the ecosystems.
And then the second thing that we do is we have horizontally partner management teams that work with our partners, the OEM partners and the go-to-market partners, so that we could help them succeed.
And then of course, we rely a great deal on the extended salesforce of our partners so they can help to evangelize our computing platform all over the world.
And so it's this mixed approach between dedicated vertical market business development teams as well as a partnership approach to partner with our OEM partners that has really made our business scale so fast..
Great, that's helpful. Thank you. And then the other question I had was regarding Colette's comment that HPC had doubled year on year. Just wondering if you had any comments on what drove that.
And is that an indication of the supercomputer types of businesses, or are there other dynamics in terms of addressing new workloads with HPC products?.
HPC is different than supercomputing. Supercomputing to us is a collection of – not a collection, but some 20 different supercomputing sites around the world. And some of the famous ones, whether it's Oak Ridge or Blue Waters at NCSA, you've got TITech in Japan.
There are supercomputing centers that are either national supercomputing centers or they could be public and open supercomputing centers for open science. And so we consider those supercomputing centers. High-performance computing is used by companies who are using simulation approaches to develop products or to simulate something.
It could be scenarios for predicting equity prices, for example; as you guys know, Wall Street is the home of some of the largest supercomputing or high-performance computing centers. The energy industry, Schlumberger, for example, is a great partner of ours, and they have a massive, massive high-performance computing infrastructure.
And Procter & Gamble uses high-performance computers to simulate their products. I think last year McDonald's was at GTC, and I hope they come this year as well.
And so I think high-performance computing, another way of thinking about it is that more and more people really have to use simulation approaches for product discovery and product design and product simulation and to stress the products beyond what is possible in a physical way so that they understand the tolerance of the products and make sure they're as reliable as possible..
Your next question comes from the line of Blayne Curtis from Barclays..
Hey, thanks for taking my questions and nice results. Just curious, Jen-Hsun, you've seen a half dozen, dozen private companies going after dedicated (54:35) silicon, like the Google TPU. I know you felt the comparison to a CPU maybe wasn't fair, but I was just curious your response to these claims of 10x, 100x, 500x performance better than a GPU..
It's not that it's not fair. It's just not right; it's not correct. And so in business, who cares about being fair? And so I wasn't looking for fair; I was just looking for right. And so the data has to be correct. It turns out, as I said earlier, our hyperscale business has three different pillars.
There's training, which our GPUs are quite good at. There's cloud service provision, which is a GPU computing architecture opportunity where CUDA is really the reason why people are adopting it and all the applications that have adopted CUDA over the years.
And then there's inferencing, and inferencing is an opportunity that is a zero business for us at the moment. We do 0% of our business in inferencing, and it's 100% on CPUs. And in the case of Google, they did a great thing and built a TPU as an ASIC. And they compared the TPU against one of our older GPUs. And so I published a blog.
I wrote a blog to clarify some of the comparisons, and you can look that up. But the way to think about that is our Pascal was probably approximately twice the performance of the TPU, the first-generation TPU. And it's incumbent upon us to continue to drive the performance of inferencing. This is something that's still kind of new for us.
And tomorrow I'm probably going to say a few words about inferencing and maybe introduce a few ideas, but inferencing is new to us. There are 10 million CPUs in the world in the cloud, and today many of them are running Hadoop and doing queries and looking up files and things like that.
But in the future, the belief is that the vast majority of the world's cloud queries will be inference queries, will be AI queries. Every single query that goes into the cloud will likely have some artificial intelligence network that it processes, and I think that's our opportunity.
We have an opportunity to do inferencing better than anybody in the world, and it's up to us to prove it. At the moment, I think it's safe to say that the P40, the Tesla P40 is the fastest on the planet, period. And then from here on forward, it's incumbent upon us to continue to lean into that and do a better job..
Thanks. And then just moving to the gaming GPU side, I was just wondering if you can just talk about the competitive landscape looking back at the last refresh. And then looking forward into the back half of this year, I think your competitors have a new platform.
I'm just curious as to your thoughts as to how the share worked out on the previous refresh and then the competitiveness into the second half of this year..
My assessment is that the competitive position is not going to change..
That's a short answer, thank you..
Your last question comes from the line of Mitch Steves from RBC..
Hey, guys. Thanks for taking my question. I just have one actually on the gaming side. I remember at CES you had mentioned a leasing model that almost effectively targets the low-end consumer of gaming products. So I'm just wondering if that will be some catalyst in the back half.
Or how do we think about gaming working out in terms of both the leasing model and the year-over-year comparison getting a bit difficult?.
Hi, Mitch. I was just talking about that earlier in one of the questions – it's called GeForce NOW. And I announced at CES, and I said that right around this time of year we're going to open it up for external beta. We've been running internal beta and closed beta for some time. And so we're looking forward to opening up the external beta.
My expectation is that it's going to be some time as we scale that out. It's going to take several years. I don't think it's something that's going to be an overnight success. And as you know, overnight successes don't happen overnight.
However, I'm optimistic about the opportunity to extend the GeForce platform beyond the gamers that we currently have in our installed base. There are several billion gamers on the planet. And I believe that every human will be a gamer someday, and every human will have some way to enjoy an alternative universe some way someday.
And we would love to be the company that brings it to everybody. And the only way to really do that on a very, very large scale basis and reach all those people is over the cloud. And so I think our PC gaming business is going to continue to be quite vibrant.
It's going to continue to advance, and then hopefully we can overlay our cloud reach on top of that over time..
Got it, thank you..
Thanks a lot, thanks for all the questions today. I really appreciate it. We had another record quarter. We saw growth across our four market platforms. AI is expanding. Data center nearly tripled, large ISP/CSP deployments everywhere. PC gaming is still growing, e-sports, AAA gaming titles fueling our growth there. And we have great games on the horizon.
Autonomous vehicles are becoming imperative in all sectors of transportation, as we talked about earlier. We have a great position with our DRIVE AI computing platform. And as Moore's Law continues to slow, GPU accelerated computing is becoming more important than ever, and NVIDIA is at the center of that. Don't miss tomorrow's GTC keynote.
We'll have exciting news to share, next-generation AI, self-driving cars, exciting partnerships, and more. Thanks, everybody..
This concludes today's conference call. You may now disconnect..