Thank you for standing by, and welcome to Intel Corporation’s Second Quarter 2023 Earnings Conference Call. At this time, all participants are in a listen-only mode. After the speakers’ presentation, there will be a question-and-answer session. [Operator Instructions] As a reminder, today’s program is being recorded.
And now I’d like to introduce your host for today’s program, Mr. John Pitzer, Corporate Vice President of Investor Relations..
Thank you, Jonathan. By now, you should have received a copy of the Q2 earnings release and earnings presentation, both of which are available on our investor website, intc.com. For those joining us online, the earnings presentation is also available in our webcast window. I am joined today by our CEO, Pat Gelsinger; and our CFO, David Zinsner.
In a moment, we will hear brief comments from both followed by a Q&A session. Before we begin, please note that today’s discussion does contain forward-looking statements based on the environment as we currently see it, and as such, are subject to various risks and uncertainties.
Our discussion also contains references to non-GAAP financial measures that we believe provide useful information to our investors. Our earnings release, most recent annual report on Form 10-K and other filings with the SEC provide more information on specific risk factors that could cause actual results to differ materially from our expectations.
They also provide additional information on our non-GAAP financial measures, including reconciliations, where appropriate, to our corresponding GAAP financial measures. With that, let me turn things over to Pat..
Thank you, John, and good afternoon, everyone. Our strong second quarter results exceeded expectations on both the top and bottom line, demonstrating continued financial improvement and confirmation of our strategy in the marketplace. Effective execution across our process and product road maps is rebuilding customer confidence in Intel.
Strength in client and data center and our efforts to drive efficiencies and cost savings across the organization all contributed to the upside in the quarter and a return to profitability. We remain committed to delivering on our strategic road map, achieving our long-term goals and maximizing shareholder value.
In Q2, we began to see real benefits from our accelerating AI opportunity. We believe we are in a unique position to drive the best possible TCO for our customers at every node on the AI continuum. Our strategy is to democratize AI, scaling it and making it ubiquitous across the full continuum of workloads and usage models.
We are championing an open ecosystem with a full suite of silicon and software IP to drive AI from cloud to enterprise, network, edge and client across data prep, training and inference in both discrete and integrated solutions.
As we have previously outlined, AI is 1 of our 5 superpowers along with pervasive connectivity, ubiquitous compute, cloud to edge infrastructure and sensing, underpinning a $1 trillion semi industry by 2030.
Intel Foundry Services, or IFS, positions us to further capitalize on the AI market opportunity as well as the growing need for a secure, diversified and resilient global supply chain. IFS is a significant accelerant to our IDM 2.0 strategy, and every day of geopolitical tension reinforces the correctness of our strategy.
IFS expands our scale, accelerates our ramps at the leading edge and creates long tails at the trailing edge. More importantly for our customers, it provides choice, leading edge capacity outside of Asia and at 18A and beyond, what we believe will deliver leadership performance.
We are executing well on our Intel 18A as a key foundry offering and continue to make substantial progress against our strategy. In addition, in July, we announced that Boeing and Northrop Grumman will join the RAMP-C program along with IBM, Microsoft and NVIDIA.
The Rapid Assured Microelectronics Prototypes-Commercial, or RAMP-C, program was created by the U.S. Department of Defense in 2021 to assure domestic access to next-generation semiconductors, specifically by establishing and demonstrating a U.S.-based foundry ecosystem to develop and fabricate chips on Intel 18A.
RAMP-C continues to build on recent customer and partner announcements by IFS, including MediaTek, ARM and a leading cloud, edge and data center solutions provider. We also made good progress on two significant 18A opportunities this quarter.
We are strategically investing in manufacturing capacity to further advance our IDM 2.0 strategy and overarching foundry ambitions while adhering to our Smart Capital strategy. In Q2, we announced an expanded investment to build two leading-edge semiconductor facilities in Germany as well as plans for a new assembly and test facility in Poland.
The building out of Silicon Junction in Magdeburg is an important part of our go-forward strategy. And with our investment in Poland and the Ireland sites, we already operate at scale in the region. We are encouraged to see the passage of the EU Chips bill supporting our building out an unrivaled capacity corridor in Europe.
In addition, a year after being signed into law, we submitted our first application for U.S. CHIPS funding for the on-track construction of our fab expansion in Arizona, working closely with the U.S. Department of Commerce.
It all starts with our process and product road maps, and I am pleased to report that all our programs are on or ahead of schedule. We remain on track to 5 nodes in 4 years and to regain transistor performance and power performance leadership by 2025.
Looking specifically at each node, Intel 7 is done and with the second half launch of Meteor Lake, Intel 4, our first EUV node is essentially complete with production ramping.
For the remaining 3 nodes, I would highlight that Intel 3 met defect density and performance milestones in Q2, released PDK 1.1 and is on track for overall yield and performance targets. We will launch Sierra Forest, our lead vehicle for Intel 3, in the first half of ‘24, with Granite Rapids following shortly thereafter.
On Intel 20A, our first node using both RibbonFET and PowerVia, Arrow Lake, a volume client product is currently running its first stepping in the fab.
In Q2, we announced that we will be the first to implement backside power delivery in silicon 2-plus years ahead of the industry, enabling power savings, area efficiency and performance gains for increasing compute demands, ideal for use cases like AI, CPUs and graphics.
In addition, backside power improves ease of design, a major benefit not only for our own products but even more so for our foundry customers. On Intel 18A, we continue to run internal and external test chips and remain on track to being manufacturing-ready in the second half of 2024.
Just this week, we were pleased to have announced an agreement with Ericsson to partner broadly on their next-generation optimized 5G infrastructure. Reinforcing customer confidence in our road map, Ericsson will be utilizing Intel’s 18A process technology for its future custom 5G SoC offerings. Moving to products.
Our client business exceeded expectations and gained share yet again in Q2 as the group executed well, seeing a modest recovery in the consumer and education segments as well as strength in premium segments where we have leadership performance. We have worked closely with our customers to manage client CPU inventory down to healthy levels.
As we continue to execute against our strategic initiatives, we see a sustained recovery in the second half of the year as inventory has normalized. Importantly, we see the AI PC as a critical inflection point for the PC market over the coming years that will rival the importance of Centrino and Wi-Fi in the early 2000s.
And we believe that Intel is very well positioned to capitalize on the emerging growth opportunity. In addition, we remain positive on the long-term outlook for PCs as household density is stable to increasing across most regions and usage remains above pre-pandemic levels.
Building on strong demand for our 13th Gen Intel processor family, Meteor Lake is ramping well in anticipation of a Q3 PRQ and will maintain and extend our performance leadership and the share gains we have delivered over the last 4 quarters.
Meteor Lake will be a key inflection point in our client processor road map as the first PC platform built on Intel 4, our first EUV node and the first client chiplet design enabled by Foveros advanced 3D packaging technology, delivering improved power, efficiency and graphics performance.
Meteor Lake will also feature a dedicated AI engine, Intel AI Boost.
With AI Boost, our integrated neural VPU, enabling dedicated low-power compute for AI workloads, we will bring AI use cases to life through key experiences people will want and need for hybrid work, productivity, sensing, security and creator capabilities, many of which were previewed at Microsoft’s Build 2023 conference.
Finally, while we have made the decision to end direct investment in our Next Unit of Computing, or NUC, business, this well-regarded brand will continue to scale effectively through our recently announced ASUS partnership. In the data center, our 4th Gen Xeon Scalable processor is showing strong customer demand despite the mixed overall market environment.
I am pleased to say that we are poised to ship our 1 millionth 4th Gen Xeon unit in the coming days. This quarter, we also announced the general availability of 4th Gen Xeon-based cloud instances by Google Cloud.
We also saw great progress with 4th Gen Xeon AI acceleration capabilities, and we now estimate more than 25% of Xeon data center shipments are targeted for AI workloads.
Also in Q2, we saw a third-party validation from MLCommons when they published MLPerf training performance benchmark data showing that 4th Gen Xeon and Habana Gaudi2 are two strong open alternatives in the AI market that compete on both performance and price versus the competition.
End-to-end AI-infused applications like DeepMind’s AlphaFold and algorithm areas such as Graph Neural Networks show our 4th Gen Xeon outperforming other alternatives, including the best published GPU results.
Our strengthening positioning within the AI market was reinforced by our recent announcement of our collaboration with Boston Consulting Group to deliver enterprise-grade, secure and responsible generative AI, leveraging our Gaudi and 4th Gen Xeon offerings to unlock business value while maintaining high levels of security and data privacy.
Our data center CPU road map continues to get stronger and remains on or incrementally ahead of schedule with Emerald Rapids, our 5th Gen Xeon Scalable set to launch in Q4 of ‘23. Sierra Forest, our lead vehicle for Intel 3 will launch in the first half of ‘24. Granite Rapids will follow shortly thereafter.
For both Sierra Forest and Granite Rapids, volume validation with customers is progressing ahead of schedule. Multiple Sierra Forest customers have powered on their boards and silicon is hitting all power and performance targets. Clearwater Forest, the follow-on to Sierra Forest, will come to market in 2025 and be manufactured on Intel 18A.
While we performed ahead of expectations, the Q2 consumption TAM for servers remained soft on persistent weakness across all segments but particularly in the enterprise and rest of world where the recovery is taking longer than expected across the entire industry.
We see the server CPU inventory digestion persisting in the second half, additionally impacted by the near-term wallet-share focus on AI accelerators rather than general-purpose compute in the cloud. We expect Q3 server CPUs to modestly decline sequentially before recovering in Q4.
Longer term, we see AI as TAM expansive to server CPUs, and more importantly, we see our accelerator product portfolio as well positioned to gain share in 2024 and beyond. The surging demand for AI products and services is expanding the pipeline of business engagements for our accelerator products, which include our Gaudi, Flex and Max product lines.
Our pipeline of opportunities through 2024 is rapidly increasing and is now over $1 billion and continuing to expand with Gaudi driving the lion’s share.
The value of our AI products is demonstrated by the public instances of Gaudi at AWS and the new commitments to our Gaudi product line from leading AI companies such as Hugging Face and Stability AI, in addition to emerging AI leaders, including Indian Institute of Technology Madras Pravartak and Genesis Cloud.
In addition to building near-term momentum with our family of accelerators, we continue to make key advancements in next-generation technologies, which present significant opportunities for Intel.
In Q2, we shipped our test chip, Tunnel Falls, a 12-qubit silicon-based quantum chip, which uniquely leverages decades of transistor design and manufacturing investments and expertise.
Tunnel Falls fabrication achieved 95% yield rate with voltage uniformity similar to chips manufactured under the more usual CMOS process, with a single 300-millimeter wafer providing 24,000 quantum dot test chips.
We strongly believe our silicon approach is the only path to true cost-effective commercialization of quantum computing; a silicon-based qubit approach is 1 million times smaller than alternative approaches. Turning to PSG, NEX and Mobileye, demand trends are relatively stronger across our broad-based markets like industrial, auto and infrastructure.
Although as anticipated, NEX did see a Q2 inventory correction, which we expect to continue into Q3.
In contrast, PSG, IFS and Mobileye continue on a solid growth trajectory, and we see the collection of these businesses in total growing year-on-year in calendar year ‘23, much better than third-party expectations for a mid-single-digit decline in the semiconductor market, excluding memory.
Looking specifically at our Programmable Solutions Group, we delivered record results for a third consecutive quarter. In Q2, we announced that the Intel Agilex 7 with the R-Tile chiplet is shipping production-qualified devices in volume to help customers accelerate workloads with seamless integration and the highest-bandwidth processor interfaces.
We have now PRQed 11 of the 15 new products we expected to bring to market in calendar year ‘23. For NEX, during Q2, Intel, Ericsson and HPE successfully demonstrated the industry’s first vRAN solution running on the 4th Gen Intel Xeon Scalable processor with Intel vRAN Boost.
In addition, we will enhance the collaboration we announced at Mobile World Congress to accelerate industry-scale open RAN, utilizing standard Intel Xeon-based platforms as telcos transform to a foundation of programmable, software-defined infrastructure.
Mobileye continued to generate strong profitability in Q2 and demonstrated impressive traction with their advanced product portfolio by announcing a SuperVision eyes-on, hands-off design win with Porsche and a mobility-as-a-service collaboration with Volkswagen Group that will soon begin testing in Austin, Texas.
We continue to drive technical and commercial engagement with them, co-developing leading FMCW LiDAR products based on Intel silicon photonics technology and partnering to drive the software-defined automobile vision that integrates Mobileye’s ADAS technology with Intel’s cockpit offerings.
Additionally, in the second quarter, we executed the secondary offering that generated meaningful proceeds as we continue to optimize our value creation efforts.
In addition to executing on our process and product road maps during the quarter, we remain on track to achieve our goal of reducing costs by $3 billion in 2023 and $8 billion to $10 billion exiting 2025.
As mentioned during our internal foundry webinar, our new operating model establishes a separate P&L for our manufacturing group, inclusive of IFS and TD, which enables us to facilitate and accelerate our efforts to drive a best-in-class cost structure, derisk our technology for external foundry customers and fundamentally change incentives to drive incremental efficiencies.
We have already identified numerous gains in efficiency, including factory loading, test and sort time reduction, packaging cost improvements, litho field utilization improvements, reductions in staffing, expedites and many more.
It is important to underscore the inherent sustained value creation due to the tight connection between our business units and TD, manufacturing and IFS.
Finally, as we continue to optimize our portfolio, we agreed to sell a minority stake in our IMS Nanofabrication business to Bain Capital who brings a long history of partnering with companies to drive growth and value creation.
IMS has created a significant market position with multi-beam mask writing tools that are critical to the semiconductor ecosystem for enabling EUV technology and is already providing benefit to our 5 nodes in 4 years efforts. Further, this capability becomes even more critical with the adoption of high NA EUV in the second half of the decade.
As we continue to keep Moore’s Law alive and very well, IMS is a hidden gem within Intel, and the business’s growth will be exposed and accelerated through this transaction. While we still have work to do, we continue to advance our IDM 2.0 strategy. 5 nodes in 4 years remains well on track, and our product execution and road map are progressing well.
We continue to build out our foundry business, and we are seeing early signs of success as we work to truly democratize AI from cloud to enterprise, network, edge and client.
We also saw strong momentum on our financial discipline and cost savings as we return to profitability, prepare to implement our internal foundry model in 2024, and leverage our Smart Capital strategy to effectively and efficiently position us for the future. With that, I will turn it over to Dave..
Thank you, Pat, and good afternoon, everyone. We drove stronger than expected business results in the second quarter, comfortably beating guidance on both the top and bottom line. While we expect continued improvement to global macroeconomic conditions, the pace of recovery remains moderate.
We will continue to focus on what we can control, prioritizing investments critical to our IDM 2.0 transformation, prudently and aggressively managing expenses near term, and driving fundamental improvements to our cost structure longer term. Second quarter revenue was $12.9 billion, more than $900 million above the midpoint of our guidance.
Revenue exceeded our expectations in CCG, DCAI, IFS and Mobileye, partially offset by continued demand softness and elevated inventory levels in the network and edge markets, which impacted NEX results. Gross margin was 39.8%, 230 basis points better than guidance on stronger revenue.
EPS for the quarter was $0.13, beating guidance by $0.17 as our revenue strength, better gross margin and disciplined OpEx management resulted in a return to profitability. Q2 operating cash flow was $2.8 billion, up $4.6 billion sequentially.
Net inventory was reduced by $1 billion or 18 days in the quarter, and accounts receivable declined by $850 million or 7 days as we continue to focus on disciplined cash management. Net CapEx was $5.5 billion, resulting in an adjusted free cash flow of negative $2.7 billion, and we paid dividends of $0.5 billion in the quarter.
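For readers following the cash flow math, here is a minimal arithmetic sketch in Python using only the figures stated above; it simply treats adjusted free cash flow as operating cash flow less net CapEx, which reconciles to the negative $2.7 billion cited.

```python
# Minimal sketch reconciling the Q2 cash flow figures quoted above.
# Adjusted free cash flow is treated here simply as operating cash flow minus net CapEx.
operating_cash_flow_b = 2.8   # Q2 operating cash flow, $B (stated above)
net_capex_b = 5.5             # Q2 net CapEx, $B (stated above)
dividends_b = 0.5             # dividends paid in the quarter, $B (stated above)

adjusted_free_cash_flow_b = operating_cash_flow_b - net_capex_b
print(f"Adjusted free cash flow: {adjusted_free_cash_flow_b:+.1f}B")  # prints -2.7B, as cited
print(f"Dividends paid: {dividends_b:.1f}B")
```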
Our actions in the last few weeks, the completed secondary offering of Mobileye shares, and the upcoming investment in our IMS Nanofabrication business by Bain Capital will generate more than $2.4 billion of cash and help to unlock roughly $35 billion of shareholder value.
These actions further bolster our strong balance sheet and investment-grade profile with cash and short-term investments of more than $24 billion exiting Q2. We’ll continue to focus on avenues to generate shareholder value from our broad portfolio of assets in support of our IDM 2.0 strategy. Moving to second quarter business unit results.
CCG delivered revenue of $6.8 billion, up 18% sequentially and ahead of our expectations for the quarter as the pace of customer inventory burn slowed. As anticipated, we see the market moving toward equilibrium and expect shipments to more closely align to consumption in the second half.
ASPs declined modestly in the quarter due to a higher mix of education shipments and sell-through of older inventory.
CCG showed outstanding execution in Q2, generating operating profit of $1 billion, an improvement of more than $500 million sequentially on higher revenue, improved unit costs and reduced operating expenses, offsetting the impact of pre-PRQ inventory reserves in preparation for the second half launch of Meteor Lake.
DCAI revenue was $4 billion, ahead of expectations and up 8% sequentially, with the Xeon business up double digits sequentially. Data center CPU TAM contracted meaningfully in the first half of ‘23.
And while we expect the magnitude of year-over-year declines to diminish in the second half, a slower-than-anticipated TAM recovery in China and across enterprise markets has delayed a return of CPU TAM growth.
CPU market share remained relatively stable in Q2 and the continued ramp of Sapphire Rapids contributed to CPU ASP improvement of 3% sequentially and 17% year-over-year. DCAI had an operating loss of $161 million, improving sequentially on higher revenue and ASPs and reduced operating expenses.
Within DCAI, our FPGA products delivered a third consecutive quarter of record revenue, up 35% year-over-year, along with another record quarterly operating margin. We expect this business to return to a more natural demand profile in the second half of the year as we work down customer backlog to normalized levels.
NEX revenue was $1.4 billion, below our expectations in the quarter and down significantly in comparison to a record Q2 ‘22. Network and edge markets are slowly working through elevated inventory levels, elongated by a sluggish China recovery, and telcos have delayed infrastructure investments due to macro uncertainty.
We see demand remaining weak through at least the third quarter. Q2 NEX operating loss of $187 million improved sequentially on lower inventory reserves and reduced operating expenses. Mobileye continued to perform well in Q2.
Revenue was $454 million, roughly flat sequentially and year-over-year, with operating profit improving sequentially to $129 million. This morning, Mobileye increased its fiscal year 2023 outlook for adjusted operating income by 9% at the midpoint.
Intel Foundry Services revenue was $232 million, up 4x year-over-year and nearly doubling sequentially on increased packaging revenue and higher sales of IMS Nanofabrication tools. Operating loss was $143 million with higher factory start-up costs offsetting stronger revenue.
Q2 was another strong quarter of cross-company spending discipline with operating expenses down 14% year-over-year. We’re on track to achieve $3 billion of spending reductions in ‘23.
With the decision to stop direct investment in our client NUC business earlier this month, we have now exited 9 lines of business since Pat rejoined the Company with a combined annual savings of more than $1.7 billion.
Through focused investment prioritization and austerity measures in the first half of the year, some of which are temporary in nature, OpEx is tracking a couple of hundred million dollars better than our $19.6 billion ‘23 committed goal. Now turning to Q3 guidance. We expect third quarter revenue of $12.9 billion to $13.9 billion.
At the midpoint of $13.4 billion, we expect client CPU shipments to more closely match sell-through. Data center, network and edge markets continue to face mixed macro signals and elevated inventory levels in the third quarter, while IFS and Mobileye are well positioned to generate strong sequential and year-over-year growth.
We’re forecasting gross margin of 43%, a tax rate of 13% and EPS of $0.20 at the midpoint of revenue guidance. We expect sequential margin improvement on higher sales and lower pre-PRQ inventory reserves.
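As a quick sanity check on that guidance arithmetic, a minimal sketch follows; the implied gross profit dollars are a derived illustration, not a figure given on the call.

```python
# Minimal sketch of the Q3 guidance arithmetic quoted above.
guide_low_b, guide_high_b = 12.9, 13.9         # Q3 revenue guidance range, $B
midpoint_b = (guide_low_b + guide_high_b) / 2  # 13.4, as stated
gross_margin = 0.43                            # guided gross margin at the midpoint of revenue

implied_gross_profit_b = midpoint_b * gross_margin  # illustrative only, not a guided figure
print(f"Revenue midpoint: {midpoint_b:.1f}B")
print(f"Implied gross profit at 43%: {implied_gross_profit_b:.2f}B")  # ~5.76B
```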
While we’re starting to see some improvement in factory under load charges, most of the benefit will take some time to run through inventory and positively impact cost of sales.
Investment in manufacturing capacity continues to be guided by our Smart Capital framework, creating flexibility through proactive investment in shells and aligning equipment purchases to customer demand.
In the last few weeks, we have closed agreements with governments in Poland and Germany, which include significant capital incentives, and we’re well positioned to meet the requirements of funding laid out by the U.S. CHIPS Act.
Looking at capital requirements and offsets made possible by our Smart Capital strategy, we expect net capital intensity in the mid-30s as a percentage of revenue across ‘23 and ‘24 in aggregate.
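To make the net capital intensity framing concrete, here is a small illustrative sketch; the gross CapEx and revenue inputs are hypothetical placeholders rather than guidance, with the offset rate taken from the 20% to 30% range referenced in these remarks.

```python
# Illustrative sketch of the net capital intensity framing described above.
# The gross CapEx and revenue inputs are hypothetical placeholders, not company guidance.
def net_capital_intensity(gross_capex_b: float, offset_rate: float, revenue_b: float) -> float:
    """Net CapEx (gross CapEx less capital offsets) as a share of revenue."""
    net_capex_b = gross_capex_b * (1.0 - offset_rate)
    return net_capex_b / revenue_b

# Hypothetical two-year aggregates, with offsets at the high end of the 20%-30% range.
intensity = net_capital_intensity(gross_capex_b=50.0, offset_rate=0.30, revenue_b=100.0)
print(f"Net capital intensity: {intensity:.0%}")  # 35%, i.e., in the mid-30s of revenue
```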
While our expectations for gross CapEx have not changed, the timing of some capital assets is uncertain and could land in either ‘23 or ‘24, depending on a number of factors.
Having said that, we’re confident in the level of capital offsets we will receive over the next 18 months and expect offsets to track to the high end of our previous range of 20% to 30%. Our financial results in Q2 reflect improved execution and improving macro conditions.
Despite a slower-than-expected recovery in key consumption markets like China and the enterprise, we maintain our forecast of sequential revenue growth throughout the year.
Accelerating AI use cases will drive increased demand for compute across the AI continuum, and Intel is well positioned to capitalize on the opportunity in each of our business units.
We remain focused on the execution of our near- and long-term product, process and financial commitments and the prioritization of our owners’ capital to generate free cash flow and create value for our stakeholders. With that, let me turn the call back over to John..
Thank you, Dave. We will now transition to the Q&A portion of our earnings presentation. As a reminder, we would ask each of you to ask one question with a brief follow-up question where appropriate.
With that, Jonathan, can we have the first caller please?.
Certainly. And our first question comes from the line of Ross Seymore from Deutsche Bank..
Hi, guys. Thanks for letting me ask a question. Congrats on the strong results. I wanted to focus, Pat, on the data center, the DCAI side of things. Strong upside in the quarter, but it sounds like there are still some mixed trends going forward. So, I guess a two-part question.
Can you talk about what drove the upside and where the concern is going forward? And part of that concern, that crowding out potential that you just discussed with, accelerators versus CPUs, how is that playing out and when do you expect it to end?.
Yes. Thanks, Ross, and thanks for the congrats on the quarter as well. I’m super proud of my team for the great execution this quarter, top, bottom line, beats, raise, just great execution across every aspect of the business, both financially as well as road map execution.
With regard to the data center, obviously, I’ll just say we executed well: winning designs, fighting hard in the market, regaining our momentum. As you said, we’ll hit the 1 millionth unit of Sapphire Rapids, our 4th Gen Xeon, in the next couple of days. So overall, it’s feeling good.
Road map’s in very good shape, so we’re feeling very good about the future outlook of the business as well. As we look to 5th Gen E-core, P-core with Sapphire and Granite Rapids, so all of those, I’ll just say we’re performing well. That said, we do think that the next quarter, at least, will show some softness.
There’s some inventory burn that we’re still working through. We do see that big cloud customers, in particular, have put a lot of energy into building out their high-end AI training environments. And that is putting more of their budgets focused or prioritized into the AI portion of their build-out.
That said, we do think this is a near term, right, surge that we expect will balance over time. We see AI as a workload, not as a market, right, which will affect every aspect of the business, whether it’s client, whether it’s edge, whether it’s standard data center, on-premise enterprise or cloud.
We’re also seeing that Gen 4 Xeon has significant AI capabilities, and we’ll be enhancing that in the future road map. And as you heard in the prepared remarks, we estimate that about 25% of our Gen 4 shipments today, and growing, are being driven by AI use cases.
And obviously, we’re going to be participating more in the accelerator portion of the market with our Gaudi, Flex and Max product lines. Particularly, Gaudi is gaining a lot of momentum. In my formal remarks, we said we now have over $1 billion of pipeline, up 6x in the last quarter. So, we’re going to participate in the accelerator portion of it.
We’re seeing real opportunity for the CPU as that workload balances over time between CPU and accelerator. And obviously, we have a strong position to democratize AI across our entire portfolio of products..
Ross, do you have a quick follow-up?.
I do. I just want to pivot to Dave on a question on the gross margin side. Nice beat in the quarter and the sequential increase for the third quarter as well.
Beyond the revenue increase side, which I know is important, can you just walk us through some of the pluses and minuses sequentially into the third quarter and even into the back half, some of the pre-PRQ reversals under utilization? Any of those kind of idiosyncratic blocks that we should be aware of as we think about the gross margin in the second half of the year?.
On underload, there are a couple of pieces: one, we get that period charge for some of our underloading, but some of our underloading is actually just a function of the cost of the inventory. And so that takes some time to flow through. So, it will be a modest decline but nevertheless helpful on the gross margin front.
And then as you point out, we will have pre-PRQ reserves in the third quarter, but they’re meaningfully down from the second quarter. Meteor Lake will not carry a pre-PRQ reserve in the third quarter because we expect to launch it this quarter.
But we have Emerald Rapids, that will certainly have some impact and then some of the other SKUs will also impact it. So, coming down but not to zero.
So we have an opportunity actually to perform better in the fourth quarter, obviously dependent on the revenue and so forth, given that pre-PRQ reserves are likely to come off again in the fourth quarter. We should improve on the loading front in the fourth quarter as well. So, there’s some, I think, some good tailwinds on the gross margin front.
I’ll just take an opportunity to talk longer term. We will continue to be weighed down for some quarters on underload because of the nature of just having it cycle through inventory and then come out through cost of sales. So for multiple quarters, we’ll have some underloading charges that we’ll see.
And then, as we have talked about really since Pat joined and we launched into 5 nodes in 4 years, we’re going to have a significant amount of start-up costs that will hit gross margins and affect us for a couple of years. But we’re really optimistic about where gross margins are going over the long term.
Ultimately, we will get back to process parity and leadership and that will enable us to not have these start-up costs be a headwind. And of course, as you bring out products at a high performance in terms of process and in terms of product, that shows up in terms of our margins.
And then, as Pat mentioned, he went through a laundry list in the prepared remarks of areas of benefit that the internal foundry model will give us. We expect a pretty meaningful amount of that to come out by the time we hit 2026. But we won’t be done there.
I mean, I think there will be multiple opportunities over the course of multiple years to improve the gross margin. So, Pat has talked about a pretty significant improvement in gross margins over time. And I think what we’re seeing today is the beginnings of seeing that improvement show up in the P&L..
And our next question comes from the line of Joe Moore from Morgan Stanley..
Dave, I think you said in your prepared remarks that data center pricing was up 17% year-on-year and that Sapphire Rapids was a factor there. Can you just talk to that? And kind of obviously, Sapphire Rapids is going to get bigger.
Can you talk about what you expect to see with platform cost in DCAI?.
Platform costs, okay. Well, first of all, ASP is obviously improving as we increase core count, and as we get more competitive on the product offerings, that enables us to have more confidence in our pricing in the market. So that’s certainly helpful. Obviously, the increase in core count affects the cost as well.
So, cost obviously goes up. But the larger drivers of our cost structure will be around what we do in terms of the internal foundry model as we get up in terms of scale and get away from these underloading charges, and as we get past the start-up costs on 5 nodes in 4 years, which data center is certainly getting hit with.
And so those things, I think, longer term will be the biggest drivers of gross margin improvement.
And as we launch Sierra Forest in the first half of next year and Granite Rapids shortly thereafter and start to produce products on the data center side that are really competitive, that enables us to be even stronger in terms of our margin outlook and should help improve the overall P&L of data center..
Joe, do you have a follow-up question?.
Sure. Just also on servers, as you look to Q3, I think you talked about some of the cautious trends there.
Can you talk to enterprise versus cloud? Is it different between the two? And also, do you see -- are you seeing anything different in China for data center versus what you’re seeing in North America?.
Yes. Thanks for the question, Joe. As we said in the prepared remarks, we do expect to see the TAM down in Q3, driven somewhat by all of these factors. It’s a little bit of data center digestion for the cloud guys, a bit of enterprise weakness, and some of that is more inventory.
And the China market, as I think has been well reported, hasn’t come back as strongly as people would have expected overall. And then the last factor was the one from Ross’s first question, around the pressure from accelerator spend being stronger. So, I think those four together, right, are leading to a bit of weakness, at least through Q3.
That said, our overall position is strengthening and we’re seeing our products improve, right? We’re seeing the benefits of the AI capabilities and our Gen 4 and beyond products improving.
We’re also starting to see some of the use cases, like graph neural networks and Google’s AlphaFold, showing best results on CPUs as well, which is increasingly gaining momentum in the industry as people look at different aspects of data preparation, data processing and different innovations in AI.
So, all of that taken together, we feel optimistic about the long-term opportunities that we have in data center, and of course, about the strengthening accelerator road map of Gaudi2, Gaudi3 and Falcon Shores, which is now being well executed. Also, our first wafers are in hand for Gaudi3.
So, we see a lot of long-term optimism even as near term, we’re working through some of the challenging environments of the market not being as strong as we would have hoped..
And our next question comes from the line of C.J. Muse from Evercore ISI..
I guess, first question, in your prepared remarks, you talked about AI being a TAM expander for servers.
And I guess, I was hoping you could elaborate on that, given the productivity gains through acceleration, would love to hear why you think that will grow units, and particularly, if you could bifurcate your commentary across both training and inference..
Yes. And thanks, C.J. And generally, there are great analogies here that from history we point to.
Cases like virtualization, which was going to destroy the CPU TAM and then ended up driving new workloads, right? If you think about a DGX platform, the leading-edge AI platform, it includes CPUs, right? Why? Head nodes, data processing and data prep dominate certain portions of the workload.
We also see, as we said, AI as a workload where you might spend 10 megawatts and months training a model, but then you’re going to use it very broadly for inferencing. We do see Meteor Lake ushering in the AI PC generation, where you have tens of watts responding in a second or two.
And then, AI is going to be in every hearing aid in the future, including mine, where it’s 10 microwatts and instantaneous. So, we do see AI driving workloads across the full spectrum of applications.
And for that, we’re going to build AI into every product that we build, whether it’s a client, whether it’s an edge platform for retail, manufacturing and industrial use cases, or whether it’s an enterprise data center, where they’re not going to stand up a dedicated 10-megawatt farm but they’re also not going to move their private data off-premises, right; they’ll use foundational models that are available in open source as well as in the big cloud and training environments.
We firmly believe in this idea of democratizing AI, opening the software stack, and creating and participating in this broad industry ecosystem that’s emerging. It is a great opportunity and one that Intel is well positioned to participate in. We’ve seen that the AI TAM, right, is part of the semiconductor TAM.
We’ve always described this trillion-dollar semiconductor opportunity and AI being one of those superpowers, as I call it, of driving it. But it’s not the only one and one that we’re going to participate in broadly across our portfolio..
C.J., do you have a follow-up question?.
Yes, please. You talked a little bit about 18A and backside power. Would love to hear what you’re seeing today in terms of both, scaling and power benefits and how your potential foundry customers are looking at that technology in particular..
Yes. Thank you. We continue to make good progress on our 5 nodes in 4 years, which culminates in 18A. 18A is proceeding well, and we got a particularly good response this quarter to PowerVia, the backside power delivery that we believe is a couple of years ahead of any other alternative, as the industry measures it.
We’re very affirmed by the Ericsson announcement, which is reinforcing the strong belief they have in 18A.
But over and above that, I mentioned in the prepared remarks the two significant 18A foundry opportunities on which we made very good progress this quarter, and an overall growing pipeline of potential foundry customers, test chips and process engagements as well. So we feel 5 nodes in 4 years is on track.
18A is the culmination of that and good interest from the industry across the board.
I’d also say, as part of the overall strength in the foundry business, and maybe tying the first part and the second part of your question together, that our packaging technologies are particularly interesting in the marketplace, an area where Intel never stumbled, right? This is an area of sustained leadership that we’ve had.
And today, many of the big AI machines are packaging limited. And because of that, we’re finding a lot of interest for our advanced packaging, and this is an area of immediate strength for the foundry business.
We set up a specific packaging business unit within our foundry business and finding a lot of great opportunities for us to pursue there as well..
And our next question comes from the line of Timothy Arcuri from UBS..
First, Dave, I had one for you. If I look at the third-party contributions, they were down a little bit, which was a little bit of a surprise. But you did say that the Arizona fab is on track. Can you sort of talk about that? And I know last quarter, you said gross CapEx would be first half weighted and the offsets would be back half weighted.
Is that still the case?.
Yes. So we did manage CapEx a bit better than I was hoping. We thought it would be more front-end loaded, but it’s looking like it’s going to be a lot more evenly distributed between the first half and the second half. And we managed CapEx really well this quarter in particular, which I think obviously helped on the free cash flow side.
It’s kind of: when you manage the CapEx down, you get fewer offsets. And so that drove the lower capital offsets for the quarter. But for the year, we’re still on track to get the same amount of capital offsets through SCIP that we had anticipated. And that’s really where most of the capital offsets have come from so far.
Now obviously, as we get into CHIPS incentives, which should be coming here in the not-too-distant future, that will add to the offsets that we get. As we go into next year, we start getting the investment tax credit, which will help on the capital offsets. So, there’ll be more things that come in the future.
But right now, it’s largely SCIP and it’s SCIP1, and that’s a function of where the spending lands quarter-to-quarter..
Yes. And just maybe to pile on to that a bit, obviously, with the EU Chips Act approved, we’re excited about that for the Germany and Poland projects, which will now go for formal DG Comp approval. We’re also very happy that we submitted our first proposal, for the on-track Arizona facility, but we’ll have 3 more proposals going in for the U.S.
CHIPS Act this quarter. And so we’re now at pace for those. So everything there is feeling exactly as we said it would and super happy with the great engagement, both in Europe as well as with the U.S. Department of Commerce as we’re working on those application processes..
Tim, do you have a follow-up question?.
I do. Yes, Pat. So, you talked about an accelerated pipeline of more than $1 billion. And I think Sandra has been recently implying that you could do over $1 billion in Gaudi next year.
So, the question is, is that the commitment? And then also at the Data Center Day, you had talked about merging the GPU and the Gaudi road maps into Falcon Shores, but that’s not going to come out until 2025. So, the question there really is where that leaves customers in terms of their commitment to your road map, given those changes..
Yes. And let me take that and Dave can add. Overall, as we said, the accelerator pipeline is now well over $1 billion and growing rapidly, up about 6x this past quarter. That’s led by, but not exclusive to, Gaudi; it also includes the Max and Flex product lines. But the lion’s share of that is Gaudi. Gaudi2 is shipping volume product today.
Gaudi3 will be the volume product for next year and then Falcon Shores in ‘25, and we’re already working on Falcon Shores 2 for ‘26. So, we have a simplified road map as we bring together our GPU and our accelerators into a single offering.
But with the progress that we’re making with Gaudi2, which becomes more generalized with Gaudi3, the software stack and the oneAPI approach that we’re taking will give customers confidence that they have forward compatibility into Gaudi3 and Falcon Shores. And we’ll just be broadening the flexibility of that software stack. We’re adding FP8.
We just added PyTorch 2 support. So every step along the way, it gets better and broader use cases. More language models are being supported. More programmability is being supported in the software stack. And we’re building that full, right, solution set as we deliver on the best of GPU and the best of matrix acceleration in the Falcon Shores time line.
But every step along the way, it just gets better. Every software release gets better. Every hardware release gets better along the way to cover more of the overall accelerator marketplace. And as I said, we now have Gaudi3 wafers. First ones are in hand, so that program is looking very good.
And with this rapidly accelerating pipeline of opportunity, we expect that we’ll be giving you very positive updates there in the future with both customers as well as expanded business opportunities..
And our next question comes from the line of Ben Reitzes from Melius Research..
Pat, you caught my attention with your comment about PCs next year, with AI having a Centrino moment. Do you mind just talking about that? When Centrino took place, it was very clear we unplugged from the wires, and investors really grasped that.
What is the aha moment with AI that’s going to accelerate the client business and benefit Intel?.
Yes. And I think the real question is what applications are going to become AI-enabled? And today, you’re starting to see that people are going to the cloud and goofing around with ChatGPT, writing a research paper and that’s like super cool, right? And kids are, of course, simplifying their homework assignments that way.
But you’re not going to do that for every use case as the client becomes AI-enabled. It must be done on the client for that to occur, right? You can’t go to the cloud; you can’t round trip to the cloud.
All of the new effects: real-time language translation in your Zoom calls, real-time transcription, automation, inferencing, relevance portraying, generated content and gaming environments, real-time creator environments being done through Adobe and others as part of the client, new productivity tools able to do local legal brief generation on clients, one after the other, right, across every aspect of consumer, developer and enterprise efficiency use cases.
We see that there’s going to be a raft of AI enablement and those will be client-centered. Those will also be at the edge. You can’t round trip to the cloud. You don’t have the latency, the bandwidth or the cost structure to round trip, let’s say, inferencing in a local convenience store to the cloud. It will all happen at the edge and at the client.
So with that in mind, we do see this idea of bringing AI directly into the client immediately, right. Meteor Lake, which we’re bringing to the market in the second half of the year, is the first major client product that includes native AI capabilities, the neural engine that we’ve talked about. And this will be a volume delivery that we will have.
And we expect that Intel, as the volume leader for the client footprint, is the one that’s going to truly democratize AI at the client and at the edge. And we do believe that this will become a driver of the TAM because people will say, “Oh, I want those new use cases.
They make me more efficient and more capable, just like Centrino made me more efficient because I didn’t have to plug into the wire, right? Now, I don’t have to go to the cloud to get these use cases. I’m going to have them locally on my PC in real time and cost effectively.”
We see this as a true AI PC moment that begins with Meteor Lake in the fall of this year..
Ben, do you have a follow-up question, please?.
Yes. I wanted to double-click on your sequential guidance in the client business. There’s some concern out there among investors that there was some demand pull-in in the second quarter, given some comments from some others.
And just wanted to talk about your confidence for sequential growth in that business based on what you’re seeing and if there was any more color there..
Yes. Let me start on that and Dave can jump in. The biggest change quarter-on-quarter that we see is that we’re now at healthy inventory levels. And we worked through inventory Q4, Q1 and some in Q2. We now see the OEMs and the channel at healthy inventory levels.
We continue to see solid demand signals for the client business from our OEMs and even some of the end-of-quarter and early quarter sell through are clear indicators of good strength in that business. And obviously, we combine that with gaining share again in Q2.
So, we come into the second half of the year with good momentum and a very strong product line. So, we feel quite good about the client business outlook..
I’d just add, normally over the last few quarters, you’ve seen us identify in the 10-Q strategic sales that we’ve made where we’ve negotiated kind of attractive deals which have accelerated demand, let’s call it.
When you look at our 10-Q, which will either be filed late tonight or early tomorrow, you’ll see that we don’t have a number in there for this quarter, which is an indication of how little we did in terms of strategic purchases. So to your question of did we pull in demand, I think that’ll probably give you a pretty good assessment of that..
And our next question comes from the line of Srini Pajjuri from Raymond James..
Pat, I have a question on AI as it relates to custom silicon. It’s great to see that you announced a customer for 18A on custom silicon. But there’s a huge demand, it seems like, for custom silicon on the AI front. I think some of your hyperscale customers are already successfully using custom silicon for -- as an AI accelerator.
So, I’m just curious what your strategy for that market is.
Is that a focus area for you? If so, do you have any engagements with customers right now?.
Yes. Thank you, Srini. And the simple answer is yes, and we have multiple ways to play in this market. Obviously, one of those is foundry customers. We have a good pipeline of 18A foundry customers and opportunities.
And several of those opportunities that we’re investigating are exactly what you described, people looking to do their own unique versions of their AI accelerator components, and we’re engaging with a number of those. But some of those are going to be variations of Intel standard products.
And this is where the IDM 2.0 strength really comes to play where they could be using some of our silicon, combining it with some of their silicon designs. And given our advanced packaging strength, that gives us another way to be participating in those areas.
And of course, that reinforces that some of the near-term opportunities will just be packaging, right, where they have already designed with one of the other foundries, but we’re going to be able to augment their capacity by engaging immediately on packaging opportunities, and we’re seeing a pipeline of those opportunities.
So overall, we agree that this is clearly going to be a market. We also see that some of the ones that you’ve seen most in the press are about particularly high-end training environments. But as we said, we see AI being infused in everything.
And there’s going to be AI chips for the edge, AI chips for the communications infrastructure, AI chips for sensing devices, for automotive devices, and we see opportunities for us, both as a product provider and as a foundry and technology provider across that spectrum, and that’s part of the unique positioning that IDM 2.0 gives us for the future..
Srini, do you have a follow-up question?.
Yes, it’s for Dave. Dave, it’s good to see the progress on the working capital front. I think previously, you said your expectation is that free cash flow would turn positive sometime in second half. Just curious if that’s still the expectation.
And also, on the gross margin front, is there any, I guess, PRQ charges that we should be aware of as we go into fourth quarter?.
Okay. So, let me just take a moment just to give the team credit on the second quarter in terms of working capital because we brought inventory down by $1 billion. Our days sales outstanding on the AR front is down to 24 days, which is exceptional. So, a lot of what you saw in terms of the improving free cash flow from Q1 to Q2 was working capital.
So, I think the team has done an outstanding job just really focusing on all the elements that drive free cash flow. Our expectation is still by the end of the year to get to breakeven free cash flow. There’s no reason why we shouldn’t achieve that. Obviously, the net CapEx might be a little different this year than we thought coming into the year.
But as we talked about, there’s just a focus on free cash flow, the improved outlook in terms of the business. We think we can get to breakeven by the end of the year.
As it relates to pre-PRQ reserves in the fourth quarter, we’re likely to have some, but it should be a pretty good quarter-over-quarter improvement from the third quarter, which was obviously a good quarter-over-quarter improvement from the second quarter..
Srini, thanks for the questions. Jonathan, I think we have time for one last caller, please..
Certainly. And our final question for today then comes from the line of Aaron Rakers from Wells Fargo..
And I do have a quick follow-up as well. Just kind of going back to the gross margin a little bit. I think, Dave, when you guided this quarter, you talked about just looking backwards, the PRQ impact was going to be about 250 basis points. I think there was also an underload impact that I think you guided to around 300 basis points.
So, I’m just curious what was -- what were those numbers in this most recent quarter relative to kind of as we try and frame what the expectation is going forward?.
Yes, they were largely as expected, although the guidance was based on a lower revenue number. So, the absolute dollars were as expected; they had a little bit less of an impact, given the revenue was higher. And both of those numbers, like I said, will be lower in the third quarter..
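For context on why the same dollar headwind shows up as fewer basis points when revenue comes in higher, here is a small illustrative sketch; the guided revenue figure is a hypothetical placeholder, and the 250 basis-point pre-PRQ impact is the one referenced in the question above.

```python
# Illustrative sketch: a fixed dollar headwind is fewer basis points on higher revenue.
# The guided revenue below is a hypothetical placeholder, not the actual Q2 guide.
def dollars_from_bps(bps: float, revenue_b: float) -> float:
    """Dollar impact ($B) implied by a basis-point gross margin headwind at a revenue level."""
    return bps / 10_000 * revenue_b

guided_revenue_b = 12.0    # hypothetical guided revenue, $B
actual_revenue_b = 12.9    # actual Q2 revenue, $B (stated earlier in the call)
guided_headwind_bps = 250  # pre-PRQ impact referenced in the question above

headwind_b = dollars_from_bps(guided_headwind_bps, guided_revenue_b)  # ~$0.30B
realized_bps = headwind_b / actual_revenue_b * 10_000                 # ~233 bps
print(f"~${headwind_b:.2f}B headwind is ~{realized_bps:.0f} bps on actual revenue")
```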
Aaron, do you have a quick follow-up?.
I do, just real quickly on just kind of the AI narrative. We talked about Gaudi a lot in the pipeline build-out.
I’m curious as you look forward, as part of that pipeline, Pat, do you expect to see deployment in some of the hyperscale cloud guys and competing against directly some of the large competitors on the GPU front with Gaudi in cloud?.
Simple answer. Yes. Right? And everyone is looking for alternatives. Clearly, the MLPerf numbers that we posted recently with Gaudi2 show very competitive numbers, significant TCO benefits for customers. They’re looking for alternatives. They’re also looking for more capacity. And so, we’re definitely engaged.
We already have Gaudi instances available on AWS today. And there are some of the names that we described in our earnings call, Stability AI and Genesis Cloud. So, some of these are the proven, I’ll say, at-scale Tier 1 cloud providers, but some of the next-generation ones are also engaging. So overall, absolutely, we expect that to be the case.
On our own dev cloud, we’re also making it easier for customers to test Gaudi more quickly. And with that, we now have 1,000 customers taking advantage of the Intel Developer Cloud. We’re building a 1,000-node Gaudi cluster so that they can test at scale in a very large training environment.
So overall, the simple answer is, yes, very much so, and we’re seeing a good pipeline of those opportunities..
So with that, let me just wrap up our time together today. Thank you. We’re grateful that you would join us today, and we’re thankful that we have the opportunity to update you on our business. And simply put, it was a very good quarter. We exceeded expectations on top line, on bottom line.
We raised guidance, and we look forward to the continued opportunities that we have to accelerate our business and to the margin improvement that comes in the second half of the year.
But even more important to me was the operational improvements that we saw, a good fiscal discipline, cost saving discipline and best of all, the progress that we’ve made, right, on our execution, our process execution, product execution, the transformational journey that we’re in.
And I just want to say a big thank you to my team for having a very good quarter that we could tell you about today. We look forward to talking to you more, particularly at our Innovation event in September. We’ll be hosting an investor Q&A track, and we hope to see many, if not all, of you there. It will be a great time. Thank you..
Thank you, ladies and gentlemen, for your participation in today’s conference. This does conclude the program. You may now disconnect. Good day..