NVIDIA - Technology - Semiconductors - NASDAQ - US
EARNINGS CALL TRANSCRIPT 2017 - Q1

Executives

Arnab K. Chanda - Senior Director, Head of Investor Relations
Colette M. Kress - Chief Financial Officer & Executive Vice President
Jen-Hsun Huang - Co-Founder, President, CEO & Director

Analysts

Vivek Arya - Bank of America Merrill Lynch
Mark Lipacis - Jefferies LLC
Stephen Chin - UBS Securities LLC
Deepon Nag - Macquarie Capital (USA), Inc.
Ting Pong Gabriel Ho - BMO Capital Markets (United States)
C.J. Muse - Evercore Group LLC
Joe L. Moore - Morgan Stanley & Co. LLC
Harlan Sur - JPMorgan Securities LLC
Ian L. Ing - MKM Partners LLC
Blayne Curtis - Barclays Capital, Inc.
Ross C. Seymore - Deutsche Bank Securities, Inc.
Craig A. Ellis - B. Riley & Co. LLC
Romit J. Shah - Nomura Securities International, Inc.
Suji De Silva - Topeka Capital Markets
David M. Wong - Wells Fargo Securities LLC

Operator

Good afternoon. My name is Claudine and I'll be your conference coordinator today. I'd like to welcome everyone to the NVIDIA Financial Results Conference Call. All lines have been placed on mute. After the speakers' remarks, there will be a question-and-answer period. This conference is being recorded Thursday, May 12, 2016.

I would now like to turn the call over to Arnab Chanda, Vice President of Investor Relations at NVIDIA. Please go ahead, sir..

Arnab K. Chanda - Senior Director, Head of Investor Relations

Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the First Quarter of Fiscal 2017. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I'd like to remind you that today's call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay by telephone until the 19th of May 2016. The webcast will be available for replay up until next quarter's conference call to discuss Q2 financial results. The content of today's call is NVIDIA's property.

It cannot be reproduced or transcribed without our prior written consent. During the course of this call, we may make forward-looking statements based on current expectations. These forward-looking statements are subject to a number of significant risks and uncertainties, and our actual results may differ materially.

For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earning release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

All our statements are made as of today, the 12th of May 2016, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures.

You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette..

Colette M. Kress - Chief Financial Officer & Executive Vice President

Our business spans four platforms: gaming, professional visualization, datacenter, and automotive. Year-on-year revenue growth continued to accelerate, increasing 13% to $1.3 billion. Our GPU business grew 15% to $1.08 billion from a year ago. The Tegra processor business was up 10% to $160 million. Growth continues to be broad-based across all four platforms.

Record performance in datacenter was driven by the adoption of deep learning across multiple industries. In Q1 our four platforms contributed nearly 87% of revenue, up from 81% a year earlier. They collectively increased 21% year over year. Let's start out with our gaming platform.

Gaming revenue increased 17% year on year to $687 million, with momentum carried forward from the holiday season, helped by the continued strength of Maxwell-based GTX processors. Last weekend at DreamHack Austin, we unveiled the GeForce GTX 1080 and GTX 1070, our first Pascal GPUs for gamers.

They represent a quantum leap for gaming and immersive VR experiences, delivering the biggest performance gain from the previous-generation architecture in a decade. Media reports and gamers have been unanimously enthusiastic.

The Verge wrote that what NVIDIA is doing with its new GTX 1000 series is bringing yesteryear's insane high-end into 2016's mainstream. We also extended our VR platform by adding spatial acoustics to our VRWorks software development kit, which helps provide an even greater sense of presence within VR.

We introduced simultaneous multi-projection, enabling accurate, efficient projection of the real world to surround monitors, VR headsets, and future displays. To showcase these technologies, we created our own amazing open-source game called NVIDIA VR Funhouse, available on Steam.

In addition, we announced Ansel, an in-game photography system which enables gamers to capture high resolution and VR scenes within their favorite games. Moving to professional visualization. Quadro grew year on year for the second consecutive quarter. Revenue rose 4% to $189 million. Growth came from higher-end products and mobile workstations.

We launched the M6000 24GB and are seeing good success among multiple customers including Toyota and Pixar. Roche is using the M6000 to speed its DNA sequencing pipeline by 8X, enabling more affordable genetic testing.

We see exciting opportunities for our Quadro platform with virtual reality and NVIDIA Iray, a photo-realistic rendering tool that enables designers to effectively walk around their creations and make real-time adjustments. Moving to datacenter.

Revenue was a record $143 million, up 63% year on year and up 47% sequentially, reflecting enormous growth in deep learning. In just a few years, deep learning has moved from academia and is now being adopted across the hyperscale landscape. We expect growing deployment in the coming year among large enterprises.

GPUs have become the accelerator of choice for hyperscale datacenters due to their superior programmability, computational performance, and power efficiency. Our Tesla M4 is over 50% more power efficient than other programmable accelerators for applications such as real-time image classification with AlexNet, a deep learning network.

Hyperscale companies are the fastest adopters of deep learning, accelerating their growth in our Tesla business. Starting from infancy three years ago, hyperscale revenue is now similar to that from high performance computing. NVIDIA GPUs today accelerate every major deep learning framework in the world.

We power IBM Watson and Facebook's Big Sur server for AI, and we're in AI platforms at hyperscale giants such as Microsoft, Amazon, Alibaba and Baidu for both training and real-time inference. Twitter has recently said they use NVIDIA GPUs to help users discover the right content among the millions of images and videos shared every day.

During the quarter, we hosted our seventh annual GPU Technology Conference. The event drew record attendance with 5,500 scientists, engineers, designers and others across a wide range of fields, and featured 600 sessions and 200 exhibitors. At GTC we unveiled the Tesla P100, the world's most advanced GPU accelerator, based on the Pascal architecture.

The P100 utilizes a combination of technologies, including NVLink, a high-speed interconnect allowing application performance to scale across multiple GPUs; high memory bandwidth; and multiple hardware features designed to natively accelerate AI applications. The Next Platform, an enterprise IT site, calls it a beast, in all the good senses of that word.

Among the first customers for our Pascal accelerator is the Swiss National Supercomputing Centre, which will use it to double the speed of Europe's fastest supercomputer. At GTC, we also announced the DGX-1, the world's first deep learning supercomputer.

Loaded with eight P100s in a single box interconnected with NVLink, it provides deep learning performance equivalent to 250 traditional servers. The DGX-1 comes loaded with a suite of software designed to aid AI and application developers.

Universities, hyperscale vendors, and large enterprises developing AI-based applications are showing strong interest in the system. Among the first to get DGX-1 will be the Massachusetts General Hospital.

It launched an initiative that applies AI techniques to improve the detection, diagnosis, treatment, and management of diseases, drawing on its database of some 10 billion medical images.

In our GRID graphics virtualization business, we're seeing interest across a variety of industries, including manufacturing, energy, education, government, and financial services.

Finally, in automotive, revenue continued to grow, reaching $113 million, up 47% year-over-year and up 22% sequentially, reflecting the growing popularity of premium infotainment features in mainstream cars.

NVIDIA's working closely with partners to develop self-driving cars using our end-to-end platform, which starts with Tesla in the datacenter and extends through the deployment with DRIVE PX 2. Since we unveiled DRIVE PX 2 earlier this year, worldwide interest has continued to grow among car makers, Tier 1 suppliers, and others.

We are now collaborating with more than 80 companies using the open architecture of DRIVE PX to develop their own software and driving experiences. At GTC, we demonstrated the world's first self-driving car trained using deep learning and showed its ability to navigate on roads without lane markings even in bad weather.

Additionally, we announced that DRIVE PX 2 will serve as the brain behind the new ROBORACE initiative in the Formula E racing circuit. The circuit will include 10 teams racing identical cars, all using DRIVE PX 2. Beyond our four platforms, our OEM and IP business was $173 million, down 21% year-on-year, reflecting weak PC demand.

Now, turning to the rest of the income statement. We had record GAAP and non-GAAP gross margins for the first quarter, at 57.5% and 58.6%, respectively. Driving these margins were the strength of our Maxwell GPUs, the success of our platform approach, and strong demand for deep learning.

GAAP operating expenses for the first quarter were $506 million and declined from $539 million in Q4 on lower restructuring charges. Non-GAAP operating expenses were $443 million, flat sequentially and up 4% from a year earlier, reflecting increased hiring for our growth initiatives and development-related expenses associated with Pascal.

GAAP operating income for the first quarter was $245 million, up 39% from a year earlier. Non-GAAP operating income was $322 million, also up 39%. Non-GAAP operating margins improved more than 470 basis points from a year ago to 24.7%. For the first quarter, GAAP net income was $196 million.

Non-GAAP net income was $263 million, up 41%, fueled by the strong revenue growth and improved gross and operating margins. During the first quarter, we entered into a $500 million accelerated share repurchase agreement and paid $62 million in quarterly cash dividends.

Since the restart of our capital return program in the fourth quarter of fiscal 2013, we've returned over $3.5 billion to shareholders. This represents over 100% of our cumulative free cash flow for fiscal years 2013 through this Q1.

For fiscal 2017, we intend to return approximately $1 billion to shareholders through share repurchases and quarterly cash dividends. Now, turning to the outlook for the second quarter of fiscal 2017. We expect revenue to be $1.35 billion, plus or minus 2%.

Our GAAP and non-GAAP gross margins are expected to be 57.7% and 58.0%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $500 million. Non-GAAP operating expenses are expected to be approximately $445 million.

GAAP and non-GAAP tax rates for the second quarter of fiscal 2017 are both expected to be 20%, plus or minus 1%. Further financial details are included in the CFO commentary and other information available on our IR website. We will now open the call for questions. Operator, could you please poll for questions? Thank you..

Operator

Thank you. And our first question comes from the line of Vivek Arya with Bank of America. Please go ahead..

Vivek Arya - Bank of America Merrill Lynch

Thank you for taking my question, and good job on the results and the guidance. Maybe as my first one, Jen-Hsun, how do you assess the competitive landscape in PC gaming? AMD recently claimed to be taking a lot of share, and they're launching Polaris soon.

Could you walk us through what NVIDIA does better than AMD that helps you maintain your competitive edge in this market, and what impact will Pascal have on that?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, Vivek, thank you. Our PC gaming platform, GeForce, is strong and getting stronger than ever, and I think the reason for that is several, several folds. First of all, our GPU architecture is just superior. We dedicated an enormous amount of effort to advancing our GPU architecture.

I think the engineering of NVIDIA is exquisite, and our craftsmanship is really unrivaled anywhere. The scale of our company in building GPUs is the highest and the largest of any company in the world. This is what we do. This is the one job that we do.

And so, it's not surprising to me that NVIDIA's GPU technology is further ahead than any time in its history. The second thing, however, it's just so much more than just chips anymore, as you know.

Over the last 10 years, we've started to evolve our company to much more of a platform company, and it's about developing all the algorithms that sit on top of our GPUs. A GPU is a general purpose processor.

It's a general purpose processor that's dedicated to a particular field of computing, such as computer graphics here, physics simulation, et cetera.

But the thing that's really important is all of the algorithms that sit on top of it, and we have a really, really fantastic team of computational mathematicians that captures our algorithms and our know-how into GameWorks, into the physics engine, and recently the really amazing work that we're doing in VR that we've embodied into VRWorks.

And then lastly, it's about making sure that the experience always just works. We have a huge investment in working with game developers all over the world from the moment that the game is being conceived of, all the way to the point that it's launched.

And we optimize the games on our platform, we make sure that our drivers work perfectly, and even before a gamer downloads or buys a particular game, we've already updated their software so that it works perfectly when they install the game, and we call that GFE, the GeForce Experience.

And so, Vivek, it's really about a top-to-bottom approach, and I haven't even started talking about all of the marketing work that we do in engaging the developers and engaging the gamers all over the world. This is really a network platform and all of our platform partners that take it – take our platform to market.

And so it's a pretty extensive network and it's a pretty extensive platform and it's so much more than chips anymore..

Vivek Arya - Bank of America Merrill Lynch

Got it. Thank you, Jen-Hsun. And as my follow-up, so it seems like datacenter products were the big upside surprise in Q1, grew over 60% from last year.

Could you give us some more color on what drove that upside? Was it the initial Pascal launch? Is that impact still to come? And just broadly, what trends are you seeing there in HPC versus cloud versus some of these new AI projects that you're involved with?.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, thanks. You know that I've been rather enthusiastic about high performance computing for some time. And we've been evolving our GPU platform so that it's better at general purpose computing than ever.

And almost every single datacenter in the world and every single server company in the world are working with us to build servers that are based on GPUs, based on NVIDIA GPUs for high performance computing. One of the most important areas of high performance computing has been this area called deep learning.

And this deep learning – deep learning, as you know, as you probably are starting to hear, is a brand-new computing model that takes advantage of the massively parallel processing capability of a GPU along with the big data that many companies have to essentially have software write algorithms by itself.

Deep learning is a very important field of machine learning, and machine learning is now in the process of revolutionizing artificial intelligence making machines more and more intelligent and using it to discover insight that, quite frankly, isn't possible otherwise.

And so this particular field is – was first adopted by hyperscale companies so that they could find insight and make recommendations and make predictions from the billions of customer transactions they have every day.

Now, it's in the process of moving into enterprises, but in the meantime, hyperscale companies are now in the process of deploying our GPUs in deep learning applications into production. And so we've been talking about this area for some time, and now we're starting to see the broad deployment in production, so we're quite excited about that..

Operator

And our next question comes from the line of Mark Lipacis with Jefferies. Please go ahead..

Mark Lipacis - Jefferies LLC

Thanks for taking my questions. First question, the growth in the Tesla business is impressive. And looking back, it seemed like that business actually decelerated in 2015, which was a headscratcher for me.

And I wonder, do you think that the – your customers in that business paused in anticipation of Pascal, or do you think it's the AI apps and deep learning applications that are just hitting their stride right now?.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Well, decelerating, I guess I'm not sure I recall that. The thing, though, about HPC, about GPU computing is as you know, this is a new computing model, and we've been promoting this computing model now for close to seven years. And a new computing model doesn't come along very frequently.

In fact, as far as I know, there isn't a new computing model in use anywhere that has been revolutionary in the last 20 years. And so GPU computing took some time to develop. We've been evangelizing it for quite some time. We developed robust tools that make it easier for people to take advantage of our GPUs.

We have industry expertise in a large number of industries now. We have APIs that we've created for each one of the industries.

We've been working with the ecosystem in each one of the industries and developers in each one of the industries, and as of this time, we have quite a large handful, quite a large number of industries that we accelerate applications for.

And so I think that – I guess my recommendation – my recollection would be that it has taken a long time, in fact, to have made GPU computing into a major new computing model. But I think at this point, it is pretty clear that it's going mainstream. It is really one of the best ways to achieve the post-Moore's Law era of computing acceleration.

And it's been adopted by (23:19) complications. And the one that – of course, that is a very, very big deal is deep learning and machine learning. This particular field is a brand new way of doing computing for a large number of companies and we're seeing traction all over the place..

Operator

And our next question comes from the line of Stephen Chin with UBS. Please proceed..

Stephen Chin - UBS Securities LLC

Hey, thanks for taking my questions. Jen-Hsun or Colette, first of all, I wanted to see if you could help provide some color on some of the drivers of growth for fiscal 2Q, whether most of it's coming from Pascal possibly, in the gaming market or in the Tesla products, or if there's also some of that growth in Tegra automotive as well for fiscal 2Q..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, Stephen. I would expect that all of our businesses grow in Q2. And so it's across the board. We're seeing great traction in gaming. Gaming, as you know, has multiple growth drivers. Partly the gaming is growing because the production value of games is growing, partly because the number of people who are playing is growing.

eSports is more popular than ever. Sports spectatorship is more popular than ever. And so gaming is just a larger and larger market, and it's surprising everybody. And the quality of games is going up, which means that (24:50) to go up. High performance computing has grown, and the killer app is machine learning and deep learning.

And that's going to continue to go into production at the hyperscale companies as we expand our reach into enterprises all over the world, companies who have a great deal of data that they would like to find insight in. Automotive is growing, and we're delighted to see that enterprise is growing as well.

Stephen Chin - UBS Securities LLC

Great. And as I follow up for Colette, on the gross margin side of things, you guys were guiding margins up nicely for the quarter.

And just kind of wondering, looking out further across the year, whether the levers that you have available to you currently leave further room for expansion, whether it's from product mix, higher ASPs, and/or maybe even some of the platform-related elements such as software services.

Just kind of wondering, especially on the software side, how much that can continue to help margins from a platform perspective..

Colette M. Kress - Chief Financial Officer & Executive Vice President

Sure. Thanks, Stephen. Yes, our gross margins within the quarter for Q1 did hit record levels just due to very strong mix across our products on the Maxwell side both from a gaming perspective as well as what we have in enterprise for pro visualization and datacenter.

As we look to Q2, we also see gross margins at about 58% on a non-GAAP basis. Mix will again be a strong component of that, as our launch of Pascal will come out with high-end gaming and with datacenter, and growth essentially across all of our platforms will help our overall gross margins.

As we go forward, there's still continued work to do. We're here to guide just one quarter out but we do have a large TAM in front of us on many of these different markets, and the mix will certainly help us.

We're in the initial stages of rolling out what we have in software services on our overall systems, so don't expect it to be a material part of the overall gross margin but it will definitely be a great value proposition for us for what we put forth..

Operator

Our next question comes from the line of Deepon Nag with Macquarie. Please proceed..

Deepon Nag - Macquarie Capital (USA), Inc.

Yeah, thanks, guys, and congratulations on the great quarter.

For Q2, could you talk about how much contribution you expect from Pascal, and also maybe give us an update on how you think yields are progressing right now?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah. Thanks a lot, Deepon. We're expecting a lot of Pascal. Pascal was just announced for 1080 and 1070, and both of those products are in full production. We're in production with Tesla P100, and so all of our Pascal products that we've announced are in full production, so we're expecting a lot. Yields are good.

And building these semiconductor devices is always hard, but we're very good at it. And this is now a year after the first 16-nanometer FinFET products went into production at TSMC. They have yields under great control.

TSMC is the world's best manufacturer of semiconductors, and we work very closely with them to make sure that we're ready for production, and we surely wouldn't have announced it if we didn't have manufacturing under control. So we're in great shape..

Operator

Our next question comes from the line of Ambrish Srivastava with BMO Capital Markets. Please proceed..

Ting Pong Gabriel Ho - BMO Capital Markets (United States)

Hi. This is Gabriel calling in for Ambrish. Thanks for taking the question.

When you recently launched the new GTX GPU products, your pricing, your MSRPs, obviously appear to be higher than your prior generation. How should we think about your ASPs and even your gross margin trend as you ramp this product for the rest of the year?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, thanks. The thing that's most important is that the value's greater than ever, and one of the things that we know is games are becoming richer than ever. The production values become richer than ever. And gamers want to play these games with all of the settings maxed out. They would like to play at a very high resolution.

They want to play it at very high frame rates. When I announced the 1080, I was showing all of the latest and most demanding games running at twice the resolution of a game console, at twice the frame rate of a game console, and it was barely even breathing hard.

And so I think one of the most important things is that customers in this segment want to buy a product that they can count on and that they can rely on to be ready for future-generation games. And some of the most important future-generation games are going to be in VR.

And so, the resolution's going to be even higher, the frame rate expectation is 90 hertz, and the latency has to be incredibly low so that you feel a sense of presence. And so, I think the net of it all is that the value proposition we delivered with 1080 and 1070 is just through the roof.

And if you look at the early response on the web and from analysts, they're quite excited about the value proposition that we brought..

Operator

And our next question comes from C.J. Muse with Evercore. Please go ahead..

C.J. Muse - Evercore Group LLC

Yeah, good afternoon. Thank you for taking my question. I guess two questions around the datacenter.

I guess first part, how's the visibility here today? And I guess how do you see perhaps the transition from hyperscale to a ramp in HPC? And then I know you guys don't like to forecast over the next couple quarters, but looking out over the next 12 months to 24 months, this part of your business has grown from 8% to 11% year-over-year.

And, curious, as you look at one year to two years, what do you think this could be as a percentage of your overall company? Thank you..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, C.J., thanks a lot. I think a lot – the answer to a lot of your questions is I don't know. However, there are some things I do know very well.

One of the things that we do know is that high-performance computing is an essential and central approach for one of the most important computing models that we know today, which is machine learning and deep learning.

Hyperscale datacenters all over the world are relying on this new model of computing so that they can harvest and study all of the vast amounts of data they're getting, to find insight for individual customers, to make the perfect recommendation, to predict what somebody would anticipate – would look forward to – in terms of news or products, or whatever it is.

And so this approach of using computing is really unprecedented. And this is a new computing model, and the GPU is really ideal for it. And we've been working on this for – coming up on a decade, and it explains one of the reasons why we have such a great lead in this particular aspect.

The GPU is really the ideal processor for these massively parallel problems, and we've optimized our entire stack of platforms, from the architecture to the design, to the system, to the middleware, to the system software, all the way to the work that we do with developers all over the world, so that we can optimize the entire experience to deliver the best performance.

And so this is something that's taken a long time to do. I've a great deal of confidence that machine learning is not a fad.

I've a great deal of confidence that machine learning is going to be the future computing model for a lot of very large and complicated problems, and I think that all of the stories that you see, whether it's the groundbreaking work that's done at Google and Google DeepMind on AlphaGo, to self-driving cars, to the work that people are talking about in artificial intelligence recommendation chatbots – boy, the list just goes on and on.

And I think that it goes without saying that this new computing model in the last couple of years has really started to deliver very, very promising results. And I would characterize the results as being superhuman results.

And now they're going into production, and we're seeing production deployments not just in one or two customers but basically in every single hyperscale datacenter in the world in every single country. And so, I think this is a very, very big deal. And I don't think it's a short-term phenomenon.

And the amount of data that we process is just going to grow. And so, those are some of the things I do know..

Operator

And our next question comes from the line of Mark Lipacis with Jefferies. Please proceed..

Mark Lipacis - Jefferies LLC

Hi. Thanks for cycling me back in for a follow-up. Sometimes when you introduce a new product – and this is probably common for technology – there's kind of a hiccup as the transition happens, where the supply chain blows out the older inventory before the new products can ramp in, so people call that an air pocket.

So, I was wondering, is that something that you can manage? How do you try to manage that? Did you account for it when you think about the outlook for this quarter? Thank you..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, thanks, Mark. Well, product transitions are always tricky, and we take it very seriously. And there's several things that we do know. We have a great deal of visibility to the channel and so we know how much inventory is where and of which kind. And secondarily, we have perfect visibility into our supply chain.

And both of those matters need to be taken into account when we launch a new product. And so, anything could happen. The fact of the matter is we are in a high-tech business, and high-tech is hard. The work that we do is hard. The team doesn't take it for granted and we're not complacent about our work.

And so, I think that I can't imagine a better team in the world to manage this transition. We've managed transitions all the time. And so we don't take it lightly. However, you're absolutely right. I mean, it requires care, and the only thing I can tell you is that we're very careful.

Operator

And our next question comes from the line of Joe Moore with Morgan Stanley. Please proceed..

Joe L. Moore - Morgan Stanley & Co. LLC

Great. Thank you.

I guess along the same lines, can you talk a little bit about the Founders Edition of the new gaming products? Is that different from previous reference designs that you've done? Is there any difference in economics to NVIDIA if you sell Founders Edition?.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

The Founders Edition is something we did as a result of demand from the end-user base. The Founders Edition is basically a product wholly designed by NVIDIA. A reference design is really not designed to be an end product. It's really designed to be a reference for manufacturers to use as a starting point.

But the Founders Edition is designed so that it can be manufactured, it can be marketed, and customers can continue to buy it from us for as long as they desire. Now, our strategy is to support our global network of add-in card partners, and we'll continue to do that. We gave them – we gave everybody reference designs like we did before.

And in this particular case, we created the Founders Edition for people who like to buy directly from us, people who like our industrial design, and people who would like the exquisite design and quality that comes with our products. And so it's designed to be extremely overclockable.

It's designed with all the best possible components. And if somebody would like to buy products directly from us, they have the ability to do that. I expect that the vast majority of the add-in cards will continue to be manufactured by our add-in card partners, and that's our expectation and that's our hope.

And I don't expect any dramatic change in the amount of shifting of that. So that's basically it. Founders Edition, the most exquisitely engineered add-in card the world's ever seen, directly from NVIDIA..

Operator

And our next question comes from the line of Harlan Sur with JPMorgan. Please go ahead..

Harlan Sur - JPMorgan Securities LLC

Good afternoon, and solid job on the execution. At the recent analyst day, I think the team articulated its exposure to developed and emerging markets, and the unit and ASP growth opportunities around EM.

Just wondering what are the current demand dynamics that you're seeing in emerging markets? Clearly, I think macro-wise they're still pretty weak but, on the flip side, gaming has shown to be fairly macro-insensitive. Would be great to get your views here..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

I think you just said it. Depending on which one of our businesses that you're talking about, gaming is rather macro-insensitive for some reason. People enjoy gaming. Whether the economy is good or not, whether the oil price is high or not, people seem to enjoy gaming.

Don't forget gaming is not something that people do once a month, like going out to a movie theater or something like that. People game every day, and the gamers that use our products are gaming every day. It's their way of engaging with their friends. They hang out with their friends that way. It's a platform for chatting.

Don't forget that the number one messaging company in China is actually a gaming company. And the reason for that is because while people are gaming, they're hanging out with their friends and they're chatting with their friends.

And so it's really a medium for all kinds of things, whether it's entertaining or hanging out or expressing your artistic capabilities or whatnot. And so gaming, for one, appears to be doing quite well in all aspects of the market. The second thing is enterprise, however, is largely – or hyperscale is largely a U.S. dynamic.

And the reason for that – it's a U.S. dynamic as well as a China dynamic – is because that's where most of the world's hyperscale companies happen to be. And then automotive: most of our automotive success to date has been from the European car companies, and we're seeing robust demand from the premium segments of the marketplace.

However, in the future, we're going to see a lot more success with automotive here in the United States, here in Silicon Valley, and in China. We're going to see a lot more global penetration because of our self-driving car platform.

Operator

And our next question comes from the line of Ian Ing with MKM Partners. Please go ahead..

Ian L. Ing - MKM Partners LLC

Yes. Thank you. So for July, it looks like you've got some operating expense discipline. Given some hiring activity in April, you're down sequentially.

Is that related to the timing of some tape-out activity? And as Pascal rolls out, what should the shape of the tape-outs be, do you think, for the upcoming quarters?.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Well, all of the Pascal chips have been taped out. But we still have a lot of engineering work to do. The differences are minor. We're a large company and we have a lot of things that we're doing. I wouldn't overstudy the small deltas in OpEx. We don't manage things a dollar at a time, and we're trying to invest in the important things.

On the other hand, this company is really, really good about not wasting money. And so we want to make sure that on the one hand we invest into opportunities that are very important to our company, but we just have a culture of frugality that permeates our company.

And then lastly, from an operational perspective, we've unified everything in our company behind one architecture. And whether you're talking about the cloud or workstations or datacenters or PCs or cars or embedded systems or autonomous machines, you name it, everything is exactly one architecture.

And the benefit of one architecture is that we can leverage one common stack of software. And that base software, it really streamlines our execution. And so it's an incredibly efficient approach for leveraging our one architecture into multiple markets. So those three aspects of how we run the company really helps..

Operator

And our next question comes from the line of Blayne Curtis with Barclays. Please go ahead..

Blayne Curtis - Barclays Capital, Inc.

Hey, guys. Thanks for taking my question, and nice results. Just curious, two questions. Jen-Hsun, you talked about the ramp of deep learning, and you kind of talked about how you're going to use GPUs for both learning as well as applying the inferences. Just curious, what stages – you mentioned all these customers.

What stages are all these customers at? Are they actually deploying it in volume, or are they still mostly buying for learning? And then you said all segments are up. Just curious, with OEMs finally hitting some easy compares, is that also going to be up year-over-year?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

I think (43:42).

Colette M. Kress - Chief Financial Officer & Executive Vice President

The OEM business will not be up year-over-year..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

I think the OEM business is down year-over-year, isn't it?

Colette M. Kress - Chief Financial Officer & Executive Vice President

Right. And so in Q2, we'll probably follow along with overall PC demand, which is not expected to grow. So we'll look at that as our side product, and it probably would not be a growth business in Q2.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yeah, so, Blayne, you know that our OEM business is a declining part of our company's overall business, not to mention that its margins are also significantly below the corporate average. And so that would suggest it's an increasingly less important part of the way that we go to market.

Now, what I don't mean by that is that we don't partner with the world's large OEMs. HP, Dell, IBM, Cisco, Lenovo, all of the world's large enterprise companies are our partners. We partner with them to take our platforms, our differentiated platforms, our specialty platforms to the world's markets, and most of them are related to enterprise.

We just do less and less of the high-volume, generic component device business. Generic devices like cell phones, which we got out of, and generic PCs, which we've gotten out of – largely, we tend not to do business like that anymore. We intend to focus on our differentiated platforms. Now, you mentioned learning – training and inferencing.

First of all, training is production. You can't train a network just once. You have to train your network all the time. And every single hyperscale company in the world is in the process of scaling out their training because the networks are getting bigger. They want their networks to do even better.

The difference between a 95% accurate network and a 98% accurate network or a 99% accurate network could mean billions of dollars of differences to Internet companies, and so this is a very big deal. And so they want their networks to be larger.

They want to deploy their networks across more applications, and they want to train their networks with new data all the time. And so training is a production matter. It is probably the largest HPC (high performance computing) application on the planet that we know of at the moment.

And so we're scaling – we're ramping up training for production for hyperscale companies. On the other hand, I really appreciate you asking about the inferencing. We recently – well, this year, several months ago we announced the M4, the Tesla M4 that was designed for inferencing.

And it's a little tiny graphics card, a little tiny processor, and it's less than 50 watts. It's called the M4. And at GTC, I announced a brand-new compiler called the GPU Inference Engine, GIE. GIE recompiles the network that was trained so that it can be optimally inferenced at the lowest possible energy.

And so not only are we already 50 watts, which is low-power, we can also now inference at a higher energy efficiency than any processor that we know of today, better than any CPU by a very, very long shot, better than any FPGA.

And so now hyperscale companies could use our GPUs for both training, and they use exactly the same architecture for inferencing and the energy efficiency is really fantastic. Now, the benefit of using GPU for inferencing is that you're not just trying to inference only. You're trying to oftentimes decode the image or you could be decoding the video.

You inference on it, and you might even want to use it for transcoding, which is to re-encode that video and stream it to whomever it is you want to share live video with.

And so the (48:10) that you want to do on the images and the video and the data is more than just inferencing, and the benefit of our GPU is that it's really great for all of the other stuff, too. And so we're seeing a lot of success with the M4. I expect the M4 to be quite a successful product.

And in hyperscale datacenters, my expectation is that we'll start to ramp that into production in the Q2, Q3, Q4 timeframe.

Operator

And our next question comes from the line of Ross Seymore with Deutsche Bank. Please go ahead..

Ross C. Seymore - Deutsche Bank Securities, Inc.

Hi. Thanks for letting me ask a question. On the automotive side, I just wondered – and Colette, in your CFO commentary, you mentioned product development contracts as part of the reason it was increasing.

Can you just give us a little bit of indication of what those are? And is the percentage of the revenue coming from those increasing? And then maybe, finally, is that activity indicative of future growth in any way that can be meaningful for us to track?

Colette M. Kress - Chief Financial Officer & Executive Vice President

Sure. Thanks for the question. So in our automotive business, there's definitely a process, even before we're shipping platforms into the overall cars, where we work jointly with the auto manufacturers, start-ups, and others on what may be a future product.

Many of those agreements continue, and will likely continue going forward, and that's what you see incorporated in our automotive business. So yes, you'll probably see this continue going forward. It's not necessarily consistent – it starts in some quarters and is bigger in other quarters – but that's what's incorporated in our automotive business.

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Colette, let me just add one thing. The thing to remember is that we're not selling chips into a car. We're not selling – you know that DRIVE PX is the world's first autonomous driving car computer that's powered by AI. It's powered by deep learning.

And we're seeing a lot of success with DRIVE PX, and as Colette mentioned earlier, there are some 80 companies that we're working with – whether it's Tier 1s, or OEMs, or start-up companies all over the world – in this area of autonomous vehicles. And the thing to realize is you're not selling a chip into that car.

You're working with a car company to build an autonomous driving car. And so that process requires a fair amount of engineering. And so we have a mechanism.

We have a development mechanism that allows car companies to work with our engineers to collaborate to develop these self-driving cars, and that's most of what Colette was talking about.

Operator

And our next question comes from the line of Craig Ellis with B. Riley & Company..

Craig A. Ellis - B. Riley & Co. LLC

Thanks for taking the question and congratulations on the revenue and margin performance. Jen-Hsun, I wanted to follow up on one of the comments that you made regarding Pascal. I think you indicated that all Pascal parts had taped out.

So the question is, if that is the case, will we see refresh activity across all of the platform groups in fiscal 2017? Or will some of the refresh activity in fact take place in fiscal 2018? So what's the duration of the refresh that we're looking at?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Well, first, I – thanks for the question, and we don't comment on unannounced products as you know. I hate to ruin all of the surprises for you.

But Pascal is the single most ambitious GPU architecture we have ever undertaken, and this is really the first GPU that was designed from the ground up for applications that are quite well beyond computer graphics and high-performance computing.

It was designed to take into consideration all of the things that we've learned about deep learning, all of the things that we've learned about VR. For example, it has a brand-new graphics pipeline that allows Pascal to simultaneously project onto multiple surfaces at the same time with no performance penalty.

Otherwise, it would degrade your performance in VR by a factor of two, just because you have two surfaces you're projecting onto. And we can do all kinds of amazing things for augmented reality, other types of virtual reality displays, surround displays, curved displays, dome displays.

I mean, there's all kinds of – holographic displays, there's all kinds of displays that are being invented at the moment, and we have the ability to now support those type of displays with a much more elegant architecture without degrading performance.

So Pascal – whether it's AI, whether it's gaming, whether it's VR – is really the most ambitious project we've ever undertaken, and it's going to go through all of our markets. The application for self-driving cars is going to be pretty exciting.

And so we're – of course, we have plenty to announce in the future but we've announced what we've announced..

Operator

And our next question comes from the line of Romit Shah with Nomura Research. Please go ahead..

Romit J. Shah - Nomura Securities International, Inc.

Yes. Thanks very much. Jen-Hsun, I was hoping you could just share your view today on fully autonomous driving because Mobileye's chairman has said very recently that the technology basically isn't ready and that fully autonomous cars won't be available until – I think he was saying 2019.

And I guess my question is, well, one, I'd love your view on that; and two, whether cars are fully autonomous or autonomous only in certain environments, say, one year or two years out, does it impact the trajectory of your automotive business?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

First of all, to – working on full autonomy is a great endeavor. And whether we get there 100%, 90%, 92%, 93% is in my mind completely irrelevant. The endeavor of getting there and making your car more and more autonomous – initially, of course, we would like to have a virtual co-pilot. Having a virtual co-pilot is the way I get to work every day.

Every single day I drive my Model S and every single day I put it into autonomous mode and every single day it brings me joy. And I'm not confessing necessarily, but texting a little bit is okay. And so I think that the path to full autonomy is going to be paved by amazing capabilities along the way. And so we're not waiting around for 2019.

We'll ship autonomous vehicles by the end of this year. And so, I understand that we're three years ahead of other people's schedules. However, we also know that DRIVE PX 2 is the most advanced autonomous computing – car computer in the world today. And it's powered by AI fully.

And DRIVE PX 2 – there will be a DRIVE PX 3, there will be a DRIVE PX 4, and then by 2019 I guess we'll be shipping DRIVE PX 5. So those – our roadmap is just like that. That's how we work as you guys know very well. And so I think there's a point – there's a lot of work to be done, which is the exciting part.

The thing about a technology company, a thing about any company, unless there's great problems and great challenges that we can help solve, what value do we bring? And what NVIDIA does for a living is to do what – to build computers that no other company in the world can build.

Whether it's high-performance computers that are used to power a nation's supercomputers or deep learning supercomputers so that we can gain insight from data or self-driving car computers so that autonomous cars can save people's lives and make people's lives more convenient, that's what we do.

This is the work that we do and I'm delighted to hear that we're three years ahead of the competition..

Operator

Our next question comes from the line of Suji De Silva with Topeka Capital Markets..

Suji De Silva - Topeka Capital Markets

Hi, Jen-Hsun. Hi, Colette. Congratulations on the impressive results here.

On the datacenter business, is there an inflection going on with deep learning, with the software maturity that's driving some of this at this point? And can you give us any metrics, Jen-Hsun, for how to think about the size of this opportunity for you? I know it's hard, but things like server attach rates – what percent of servers you could attach, whether it will be an M4 in the high end in every box, or maybe the number of GPUs a single deep learning implementation has – something like that would help.

Thanks..

Jen-Hsun Huang - Co-Founder, President, CEO & Director

Yes. The truth is that nobody really knows how big this deep learning market is going to be. Until two or three years ago, it was really even hard to imagine how good the results were going to be.

And if it wasn't for the groundbreaking work that was done at Google and Facebook and by other researchers around the world, how would we have discovered that it was going to be superhuman? The work that was recently done at Microsoft Research – they've achieved superhuman levels of image recognition and voice recognition that are really kind of hard to imagine, and these networks are now huge.

The Microsoft Research network, a super-deep network, is 1,000 layers deep. And so training such a network is quite a chore. It is quite an endeavor, and it is a problem where high performance computing will have to be deployed, and this is why our GPUs are so sought after.

In terms of how big that's going to be, my sense is that almost no transaction on the Internet will be without deep learning, or some machine learning inference, in the future. I just can't imagine that.

There's no recommendation of a movie, no recommendation of a purchase, no search, no image search, no text that won't somehow have passed through some smart chatbot or smartbot, or some machine learning algorithm so that they could make the transaction more – make the inference more – requests more useful to you.

And so I think this is going to be a very big thing, and then on the other hand, the enterprises – we use deep learning all over our company today. And we're not – we had the benefit of being early, because we saw the power of this technology early on. But we're seeing deep learning being used now in medical imaging all over the world.

We're seeing it being used in manufacturing. It's going to be used for scientific computing. More data is generated by high performance computers and supercomputers than by just about anything else. They generate it through simulation. They generate so much data that they have to throw the vast majority of it away. For example, the Large Hadron Collider.

Whenever the protons collide, they throw away 99% of the data, and they're able to barely keep up with just that 1%. And so by using machine learning and our GPUs, they could find insight in the rest of the 99%.

So the applications just go on and on and on, and people are now starting to understand that this deep learning really puts machine learning, puts artificial intelligence, in the hands of engineers. It's understandable, and that's one of the reasons why it's growing so fast.

And so I don't know exactly how big it's going to be, but here's my proposition: this is going to be the next big computing model. In the past, software programmers wrote programs and compiled them; in the future, we're going to have algorithms write the software for us.

And so that's a very (01:01:09) way of computing, and I think it's a very big deal..

Operator

And our next question comes from the line of David Wong with Wells Fargo. Please go ahead..

David M. Wong - Wells Fargo Securities LLC

Hey, thanks very much.

In automotive, what products are your revenues coming from currently? Is DRIVE PX at all significant? Or are your sales primarily DRIVE PX, or something else?

Jen-Hsun Huang - Co-Founder, President, CEO & Director

The primary part of our automotive business today comes from infotainment and premium infotainment systems, for example the virtual cockpit that Audi ships, and I – the vast majority of our development projects today come from DRIVE PX (01:01:51) on those projects.

We probably have 10 times as many autonomous driving projects as we have infotainment projects today, and we have a fair number of infotainment projects. And so that gives you a sense of where we were in the past, and where we're going in the future..

Operator

And I'm showing no further questions at this time. Mr. Chanda, I'll turn the call over to you.

Arnab K. Chanda - Senior Director, Head of Investor Relations

We've had a great start to the year with strong revenue growth and profitability. Pascal is a quantum leap in performance for AI, gaming, and VR, and is in full production. Deep learning is spreading across every industry, making datacenter our fastest growing business.

With growing worldwide adoption of AI, the arrival of VR and the rise of self-driving cars, we're really excited about the future. Thanks for tuning in..

Operator

Ladies and gentlemen, that concludes today's conference call. We thank you for your participation and we ask that you please disconnect your line. Have a great day, everyone..
