Ladies and gentlemen, thank you for standing by. At this time, all participants are in a listen-only mode. Later, we will conduct a question-and-answer session. [Operator Instructions] I would now like to hand the conference over to Dan O'Neil. Please go ahead, sir.
Good afternoon, and thank you all for joining us today for our Fiscal 2024 Fourth Quarter and Year-End Earnings Call. I'm joined today by Bill Brennan, Credo's Chief Executive Officer; and Dan Fleming, Credo's Chief Financial Officer.
I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate and other areas of discussion.
These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC.
It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement.
Given these risks, uncertainties and assumptions, the forward-looking events discussed during this call may not occur and actual results could differ materially and adversely from those anticipated or implied.
The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law.
Also, during this call, we will refer to certain non-GAAP financial measures which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial performance prepared in accordance with US GAAP.
A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed using the Investor Relations portion of our website. With that, I'll now turn the call over to our CEO.
Thank you for joining our fourth quarter fiscal '24 earnings call. I'll start by reviewing our results and then I'll provide highlights of what we see for fiscal '25. Our CFO, Dan Fleming, will then follow with a detailed discussion of our Q4 and fiscal year '24 results and provide our outlook for Q1.
Credo is a pure play, high speed connectivity company, delivering a range of optimized and innovative connectivity solutions to meet the needs of global data center operators and service providers.
We leverage our core SerDes technology and unique customer focused design approach to deliver a differentiated suite of solutions including Active Electrical Cables or AECs, Optical DSPs, Line Card PHYs, SerDes chiplets and SerDes IP licenses for Ethernet port speeds ranging from 100 gig up to 1.6 terabits per second.
We target the most difficult connectivity challenges facing the market where a combination of architecture, system level approach, power and performance are most differentiated.
Credo is in a market environment of steadily increasing demand for optimized solutions with higher bandwidth and improved power efficiency driven by the accelerating connectivity requirements of leading edge AI deployments. I'm pleased to say that during both fiscal Q4 and fiscal '24, Credo achieved record revenue.
In Q4, we delivered revenue of $60.8 million and non-GAAP gross margin of 66.1%. In fiscal '24, Credo achieved revenue of $193 million and non-GAAP gross margin of 62.5%. The workloads supported by our solutions changed significantly during the fiscal year, and our growth was primarily driven by AI deployments across our entire portfolio.
In fiscal Q4, roughly three quarters of our revenue was driven by AI workloads. The year was also notable as we diversified our revenue across additional customers and products. I'm proud of the team for delivering solid results across a shifting landscape and also for executing a strong quarterly sequential ramp throughout the year.
In our AEC product line, we continued our market leadership and delivered our customers a range of solutions for port speeds ranging from 100 gig to 800 gig. Furthermore, our approach of delivering system level solutions with customized hardware and software features has enabled us to build close collaborative relationships with our customers.
Over many design cycles, across numerous customers, we have dramatically improved our speed-to-market in designing and qualifying our solutions, and this remains a key aspect of our competitive advantage. We believe this positions Credo with a unique value to the market that is difficult to replicate.
With this, our AECs have quickly become the leading solution for in-rack cabled connectivity for single lane speeds of 50 gigabits per second and above.
In addition to the advantages of AECs that include signal integrity, power, form factor and reliability, our customers have embraced the opportunity to innovate with Credo as a design partner to optimize system level features that make their AI clusters more efficient.
From a customer engagement perspective, fiscal '24 was fruitful as we saw the successful ramp at a new hyperscale customer, qualification at another and expansion of our AEC engagements with hyperscalers, Tier 2 data centers, and service providers.
AECs have quickly transitioned from a new concept to a de facto solution across many data center environments. Based on customer feedback and forecasts, we continue to expect an inflection point in our AEC revenue growth during the second half of fiscal '25. Fiscal '24 was also a strong year for our Optical DSP products.
During the year, we achieved material production revenue with significant wins at domestic and international hyperscalers. Additionally, we continue to gain traction with optical module partners and end customers due to our attractive combination of performance, power efficiency and system costs.
AI back end network deployments are a strong volume driver for the Optical Transceiver and AOC market, specifically for leading edge 100 gig per lane solutions.
As power efficiency has become a more critical factor, Credo has responded with innovative architectural solutions that drastically reduce DSP power while maintaining interoperability and signal integrity at the optical module level. We've made great progress with our Linear Receive Optics DSPs.
In November, we announced our LRO DSP solutions and by March at OFC, we demonstrated production designs with three 800 gig module partners. In the few months since OFC, we've seen continued market acceptance and design activity. These products enable a significant power reduction versus a traditional 800 gig solution.
The LRO architecture is the only way to achieve a sub 10 watt 800 gig module that meets existing industry optical standards and facilitates multi-vendor interoperability. We expect the benefits of LRO solutions to become even more impactful in next generation 1.6T optical modules.
I feel confident saying that the LPO architecture with no DSP has lost nearly all momentum in the market and that the LRO architecture is showing great promise. I'm encouraged by our customer traction in Q4.
We were pleased to kick off multiple new Optical DSP design engagements with the leading optical module manufacturers, both with our new LRO DSP and our traditional full DSP solutions.
Given our results to date and our customer engagements, we are on track to achieve our optical DSP revenue goal of 10% of our fiscal '25 revenue and we are enthusiastic about future growth prospects in this category.
Regarding our Line Card PHY business, our leadership in this market continues as we transition to more advanced process nodes that deliver improved product performance and power efficiency.
During the year, we continued to add to our customer base and have multiple 100 gig per lane wins at industry leading Tier 1 OEMs and ODMs that serve the global data center market. These include 800 gig and 1.6T gearbox, Retimer and MACsec PHY products.
As we have discussed in the past, AI deployments are the driving force behind our growth for these leading edge devices. In the fourth quarter, we had success with both 50 gig and 100 gig per lane Line Card products.
While 50 gig per lane solutions will continue to have lengthy lifecycles, our 100 gig per lane solutions will also start adding to our revenue in fiscal '25. We expect the Line Card PHY business will continue to grow and contribute nicely to our overall business in fiscal '25 and beyond as we continue to invest and innovate in this market.
Lastly, I'll discuss our SerDes IP licensing and chiplet businesses. In Q4, our SerDes licensing business delivered solid results owing to our combination of speed, signal integrity, leading power efficiency and breadth of offerings. During fiscal '24, we won licensing business across a range of applications, process nodes and lane rates.
Our wins range from 28 nanometer down to 4 nanometer and lane rates ranging from 28 gig to 112 gig. Our Chiplet business saw significant growth led by our largest customer who deploys our SerDes chiplets in a massive AI cluster.
This customer also engaged us to develop a next generation chiplet for future deployments, which is a testament to our leading technology and customer centric focus. We are entering fiscal '25 with a strong and diverse funnel of SerDes licensing and chiplet opportunities.
In summary, the shift towards generative AI accelerated during our fiscal '24 and we see that continuing into the foreseeable future. Industry data and market forecasters point towards continued and growing demand for high bandwidth, energy efficient connectivity solutions that are application optimized.
Credo benefits from this demand due to our focus on innovative, low power, customer centric connectivity solutions for the most demanding applications.
Our view into fiscal '25 and beyond has remained consistent for a number of quarters now, and this has only been reinforced by recent wins, production ramps and customer forecasts as they continue to formalize their AI deployment plans. And with that, Dan Fleming, our CFO, will now provide additional details.
Thank you, Bill and good afternoon. I will first provide a financial summary of our fiscal year '24, then review our Q4 results and finally discuss our outlook for Q1 and provide some color on our expectations for fiscal year '25. Revenue for fiscal year '24 was a record at $193 million, up 5% year-over-year, driven by product revenue that grew by 8%.
Gross margin for the year was 62.5%, up 448 basis points year-over-year. Our operating margin declined by 208 basis points as we continue to invest in R&D to support product development focused on numerous opportunities across our hyperscale customers. We reported earnings per share of $0.09 for the year, a $0.04 improvement over the prior year.
Moving on to the fourth quarter, in Q4, we reported record revenue of $60.8 million up 15% sequentially and up 89% year-over-year. Our IP business generated $16.6 million of revenue in Q4, up 193% year-over-year.
IP remains a strategic part of our business but as a reminder, our IP results may vary from quarter-to-quarter, driven largely by specific deliverables to pre-existing or new contracts.
While the mix of IP and product revenue will vary in any given quarter over time, our revenue mix in Q4 was 27% IP, above our long term expectation for IP, which is 10% to 15% of revenue. Our product business generated $44.1 million of revenue in Q4, down 15% sequentially and up 67% year-over-year.
Our product business, excluding product engineering services, generated $40.8 million of revenue in Q4, up 2% sequentially. Our top four end customers were each greater than 10% of our revenue in Q4.
Our team delivered Q4 non-GAAP gross margin of 66.1%, above the high end of our guidance range and up 391 basis points sequentially, enabled by a strong IP contribution in the quarter. Our IP non-GAAP gross margin generally hovers near 100% and was 99.2% in Q4.
Our product non-GAAP gross margin was 53.7% in the quarter, down 783 basis points sequentially and up 392 basis points year-over-year. The sequential decline was due to a change in product engineering services revenue.
Total non-GAAP operating expenses in the fourth quarter were $32.7 million, below the midpoint of our guidance range and up 7% sequentially.
Our OpEx increase was a result of a 17% year-over-year increase in R&D as we continue to invest in the resources to deliver innovative solutions for our hyperscale customers and a 26% year-over-year increase in SG&A as we continue to invest in public company infrastructure.
Our non-GAAP operating income was a record $7.5 million in Q4 compared to non-GAAP operating income of $2.4 million last quarter due to strong gross margin performance coupled with top line leverage.
Our non-GAAP operating margin was also a record at 12.3% in the quarter compared to a non-GAAP operating margin of 4.6% last quarter, a sequential increase of 771 basis points. We reported non-GAAP net income of $11.8 million in Q4, compared to non-GAAP net income of $6.3 million last quarter.
Cash flow from operations in the fourth quarter was $4.2 million. CapEx was $3.2 million in the quarter, driven by R&D equipment spending, and free cash flow was $1 million, an increase of $16.7 million year-over-year. We ended the quarter with cash and equivalents of $410.0 million, an increase of $0.9 million from the third quarter.
We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our accounts receivable balance increased 33% sequentially to $59.7 million, while days sales outstanding increased to 89 days, up from 77 days in Q3. Our Q4 ending inventory was $25.9 million, down $5.6 million sequentially.
Now, turning to our guidance. We currently expect revenue in Q1 of fiscal '25 to be between $58 million and $61 million, down 2% sequentially at the midpoint. We expect Q1 non-GAAP gross margin to be within a range of 63% to 65%. We expect Q1 non-GAAP operating expenses to be between $35 million and $37 million.
The first quarter of fiscal '25 is a 14 week quarter, so included in this forecast is approximately $2 million in expenses for the extra week. We expect Q1 diluted weighted average share count to be approximately 180 million shares. We were pleased to see fiscal year '24 play out as expected.
The rapid shift to AI workloads drove new and broad based customer engagement, and we executed well to deliver the sequential growth we had forecast throughout the year. Our revenue mix transitioned swiftly through the year. In Q4, we estimate that AI revenue was approximately three quarters of total revenue, up dramatically from the prior year.
As we move forward through fiscal year '25, we expect sequential growth to accelerate in the second half of the year. From Q4 of fiscal '24 to Q4 of fiscal '25, we expect AI revenue to double year-over-year as programs across a number of customers reach production scale.
We expect fiscal year '25 non-GAAP gross margin to be within a range of 61% to 63% as product gross margins expand due to increasing scale. We expect fiscal year '25 non-GAAP operating expenses to grow at half the rate of top line growth. As a result, we look forward to driving operating leverage throughout the year.
And with that, I will open it up for questions.
Thank you. [Operator Instructions] Our first question comes from the line of Quinn Bolton from Needham. Your line is open.
Hey, guys, congratulations on the results and nice to see you quantifying the AI revenue. I guess Bill or Dan, wanted to start with just sort of a couple of housekeeping questions.
Can you give us sort of the percent of revenue from your largest four customers, and were they all across different product lines, or are you starting to see consolidation back to AECs among that top four customer base?
Yeah, Quinn. Let me comment on that. This is Dan. Yeah. So, as I mentioned in our prepared remarks, we had four 10% end customers in Q4. They were our two AEC hyperscalers that we've discussed previously, plus a large consumer company and our lead chiplet customer. So by that list, you can kind of see a broad diversification of products represented.
And I'll add to that by saying when our K is filed in the next few weeks, you'll see that we had three 10% end customers for the full year, and I'll lay those out for you since you'll see them soon enough. Our largest customer was our first AEC hyperscale customer, which we've talked about over the last few years, which is Microsoft at 26%.
Then the second was an AEC, our second AEC hyperscaler. They came in at 20% for the year. And the third 10% customer was the lead Chiplet customer that we had at 15%.
So the key takeaway, though, this year was if you go back to FY '23 versus FY '24, and we've been saying this for the last few quarters, FY '24 was really the year in which revenue diversification materialized for us both from a customer perspective and a product perspective as well. Hopefully that gives you some additional color.
Quinn?
Yeah. That's great, Dan. Thank you. And then I guess, Bill, I think if I got your prepared script, you talked about ramping a second AEC customer and then qualifying a third hyperscaler. Wondering if you could just give us a little bit of detail on the third hyperscaler.
Is it a sort of AI application? Is it NIC to TOR? Is it within a switch? Can you give us any sense of the per lane speed or total cable speed on that third engagement?
Sure. So we've talked about in the past that the first program with this hyperscaler is a switch rack. It's 50 gig per lane design, 400 gig ports. And we've seen this relationship develop in a really similar way to the first two.
Start with a single program, and after the first experience with AECs, we're now engaged with additional programs on the roadmap. I mentioned, the first program is a switch rack, and now we're working on additional programs for AI appliance racks. And these are at 100 gig per lane.
And I'll mention that the plan that we're getting from them at this point is that we'll see this third customer ramp in the second half fiscal year timeframe. So that'll contribute to the inflection point that Dan has referenced.
Perfect. Thank you.
One moment for our next question. And our next question comes from the line of Tore Svanberg from Stifel. Your line is open.
Yes. Thanks and congratulations on the record revenue. I had a question on -- Dan, your comment about AI revenue for Q4 fiscal '25. So, based on my math it sounds like AI revenue would be about $90 million.
How should we think about the non-AI revenue over the next 12 months, or in other words, that $16 million (ph) in non-AI revenue for Q4 '24? How will that progress over the next 12 months?
Yeah. So based on our comments, we didn't provide specific revenue guidance for the full year, but we wanted to provide you a framework to understand a little bit more definitively how we've been framing our revenue growth throughout the year. And as you say, AI revenue, we expect to grow 100%, Q4-to-Q4 fiscal '24 to '25.
If you look at the non-AI revenue piece, what I would say is we can assume, or you can assume modest year-over-year growth. That's not what's driving our growth in the year. It's really these AI programs that are ramping largely in the second half of the year. So that's why we framed things as we did.
The other comment I'll add to that is our overall fiscal year '25 outlook has not changed. We're just kind of giving it a little bit more specificity. So we continue to expect that meaningful growth in the year. And that second half inflection point will be fast upon us here shortly, driven by these AI programs.
Yeah. No, that's great color. And perhaps a question for you, Bill. So it looks like your PAM4 DSP business is finally starting to take off. You're targeting 10%, fiscal '25.
First of all, how much was that revenue in fiscal '24? And could you talk a little bit about the diversified customer base that you have for the PAM4 DSP business? You talked about growth in international and North America, but within North America, are we talking about several hyperscalers driving that growth?
Yeah. So first of all, for fiscal '24, we did not -- we hadn't had that 10% number as an objective in fiscal '24, but we came pretty close to it. And so we feel like things are lined up well for fiscal '25 and beyond. I would say there's multiple drivers, of course, we've got the first U.S. hyperscaler in production. We've got a second in qualification.
We're seeing a return in spending with non-U.S. hyperscalers. And we've commented that we're very well positioned for when that spending turns back on. And I'd say that these are the primary contributors in fiscal '25.
I will mention that we've got a lot of promising new engagements with optical module partners, and these new partners are really considered leaders in the industry. And of course that will -- that bridges two additional hyperscalers that are interested in looking at solutions with Credo.
I will also say that we've spent a lot of time talking about the LRO architecture in really the last six months. We see growing momentum with that LRO architecture for sure. And that's in addition to the full DSP momentum that we're building. So hopefully that gives you the color that you're looking for.
Thank you. One moment for our next question. Our next question will come from the line of Thomas O'Malley from Barclays. Your line is open.
Hey. What's going on, guys, and thanks for taking my question. I wanted to follow up on the AI guidance, obviously. If you take the comment that three quarters of the revenue was related to AI in Q4 and extrapolate that to next year, it gets to pretty big numbers, but I wanted to just zoom in on this quarter.
So you had four 10% customers, one of which was a consumer customer who we know is non-AI related. So that would kind of imply that the rest of your business was entirely AI if you just do that math.
So could you just help me walk through, is it kind of just rounding three quarters AI revenue or how should I be thinking about, like, the dollar amount? Because it obviously sounds like that's growing quite nicely, but just want to understand the base.
And if you could give some color on what that should grow to for the entire year?
Yeah. We didn't give a precise number because it's hard for us to estimate in some cases how our end customers utilize our products. But we have a fair amount of certainty that, that three quarters or 75% is where we ended for Q4. So we kind of -- that's just kind of with a caveat.
So you would use that, and really the best framework is looking at how we exited fiscal '24 versus how we expect to exit fiscal '25. So as you know, these production ramps at these large customers can take time, and they can pull in or push out a quarter or so. So that's why we're framing it kind of fiscal year end to fiscal year end.
Got you. And then, I just wanted to ask, I know you guys don't guide by product segment, but just a little color on the product and the IP side, because it's varied pretty drastically over the last couple of quarters.
So with the July guidance, with the gross margins being a bit better than expected, you would just assume that maybe the IP side is kind of staying higher sequentially. Could you give us any color on the mix there into July? Is there any product growth or is most of the -- well obviously with the midpoint of revenue a bit down.
But is there any IP color that you can give us? Does it stay at these kind of elevated rates after the big fiscal Q4? Thank you.
Yeah. Sure. So just to reiterate what we had said for guidance for Q1, for gross margin, it was 63% to 65%. So really just kind of a modest sequential decline from Q4. So it's really driven by a few things. One is IP revenue will decline sequentially quarter-over-quarter. So that will happen.
However, if you look at NRE, that's kind of -- you should assume we're at historic averages, which we were in Q4. So kind of flat quarter-over-quarter. So it's really the product gross margin. There's a bit of a revenue mix dynamic there as well.
And a lot of this, part of the theme of fiscal '25, will be increasing product margin exclusive of product engineering services due to increasing scale, as we kind of return to that roadmap where we really do drive operating margin and gross margin leverage as we increase in scale.
Thank you. One moment for our next question. Our next question will come from the line of Vijay Rakesh from Mizuho. Your line is open.
Yeah. Hi, Bill and Dan. Good quarter here. Just a quick question on the LRO side. You mentioned the 800 gig LRO, the sub 10-watt power consumption and your engagement with this. So, I'm just wondering how many CSPs you're working with on shipping that product? And how do you see those revenues ramping into '25, I guess, calendar '25?
So the work is primarily being done right now with optical module manufacturers. We've got more than a handful that are working on designs now. We have delivered first samples to the first hyperscale potential customer and we see that continuing throughout this quarter.
So as far as fiscal '25 goes, we don't have much really forecasted in what we're looking at yet, but there's a possibility that we'll ramp first significant revenue in this fiscal year. But it's not something that's really built in.
Got it. So that should be incremental. And on the Chiplet customer, your Chiplet customer is obviously increasing CapEx quite a bit. And do you see your traction growing proportionately? Like, is that starting to pick up as well? Thanks.
Yeah. So the first customer that we've got in production, the one we've talked about, we see that business really ongoing. Now, if they have a big increase in the spend on the cluster that's designed in house, we'll definitely see a participation with that. But they've got multiple different paths that they're pursuing right now.
But generally speaking, we continue to be bullish on Chiplets in general. We've got additional customers that will be coming online in the future. Again, not much built into fiscal '25, but we're bullish on the segment.
Thank you. One moment for our next question. Our next question will come from the line of Vivek Arya from Bank of America. Your line is open.
Hi. Thank you for taking our question. This is Taksan [ph] on behalf of Vivek. I just want to go back to the AEC product line. Obviously you're ramping your second customer, with a third customer also in qualification in the second half. How are you seeing the competitive dynamic? Just because Marvell and Astera are also launching their products here.
At a high level, we have not seen a significant change in the competitive environment. So our objective has always been to be first to deliver and first to qualify. And I think we've done a really good job on this objective with all of our customers. I would say that, one big advantage that we have competitively is the way we're organized.
We have more than 100 people on our team that are dedicated to the AEC business. And that includes hardware and software development. That includes qualification, production, fulfillment and support. And this really drives success with this objective, to be able to deliver first and qualify first.
So I would say that as we go deeper with each customer relationship, we really see an increasing number of requests for customized hardware and software. And I think from the standpoint of the number of SKUs that we're working on today, there are more than 20 in active development.
So competitively, I would say we're unique in a sense that we're the single point of contact and we take full responsibility for all aspects of the relationships with our customers. And this really drives their satisfaction.
When I talk more specifically about competition, we're really competing with groups of companies that need to do the same work that we're doing, but there's really no shortcut to it.
And when you've got the complexity of having multiple suppliers involved and responsible for different aspects of one solution, it's really far greater complexity than having one party like Credo being ultimately responsible. And so I guess with that said, the market's growing quickly and we do expect to see second sourcing in the future.
This is natural, and ultimately, our goal is to always be raising the competitive bar, serving our customers very well and driving their satisfaction. But I don't have specific feedback regarding the two potential competitors that you mentioned.
Of course. And then as a follow-up, just given NVIDIA is also entering this Ethernet switch market, and that could potentially have some implications on AEC as a standard for connectivity. So I was wondering if you have any color there, or if you've done any interoperability testing with the NVIDIA solutions as well? Thank you.
Sure. We've been really clear when we talk about the US hyperscalers, there is a desire to move to Ethernet long term. And so I think it comes as no surprise that we've seen a lot of discussion around NVIDIA and Ethernet. We view this as a positive for us and our business.
We've done testing with everybody that's out there from the standpoint of NICs and switches. And so we feel really quite confident that there will be an opportunity for our AECs for inter-rack connectivity. And again, we don't view this as really a surprise that the U.S. hyperscalers are driving in this direction.
Thank you. One moment for our next question. Our next question will come from the line of Richard Shannon from Craig-Hallum. Your line is open.
Hi, guys. Thanks for taking my question. I apologize if this has been touched on before.
I got on the call late here, but Bill, just following up on your comments regarding custom cables and the increasing requests there, maybe you can characterize your business now and kind of what you expect going forward here in terms of [indiscernible] profile of custom versus more commodity or standard.
Is there much of any of that going on now, or do you expect that to be a material contributor soon?
Well, I think that what we've seen is that every time we engage deeply with a customer and we open the door for innovation, basically we're open to special requests from a hardware standpoint, from a firmware or software standpoint. And what we're seeing is that customers really respond positively.
So we've organized to be able to receive these requests and really deliver on them. So I think that more and more, as we look into the future, a very large percentage of what we ship will be customer specific.
There will be a market, a smaller market, say, for standard connectivity solutions, like an 800 gig to 800 gig AEC with just two connectors and really nothing special, but we see the large majority of the volume being somewhat customer specific.
Okay. Thanks for that clarification, Bill.
My second question is on -- following up on your comments here about AI revenues doubling from this past fourth quarter to the next fourth quarter here, maybe you can characterize the degree to which back end network revenues are built into this at all, versus front end and kind of the NIC-to-TOR and other applications you've been at historically.
Yeah. I would say that you're right on from the standpoint that the backend networks are really driving the increase in revenue. And that's a general statement about AI. Of course, AI networks are also connected to the front end network, but the number of connections is small in comparison.
So I'll say that we're seeing the continued increase in the density of connections in AI clusters, and it's really driven by the combination of increased GPU performance generally, as those in that market are executing on the roadmap. But there's also a desire to increase GPU utilization. Some out there, like OpenAI, have published a document that said the average utilization of a GPU is roughly 33%. And so there's a big opportunity in going with more parallelism, and really that drives an increased density in the number of connections, really specifically to the backend, scale-up networks. So we talked about scale out and scale up.
What that means, from the standpoint of how many AEC connections are possible per GPU: some of the backend clusters that we're ramping in the second half will have two or four AECs per GPU, and we're working on next generation platforms that will actually increase that number of connections to eight or even higher per GPU.
And so I think if you take it to a rack level, say an AI appliance rack level, we're seeing a density today of between 56 and 64 AECs per rack, and we expect this number to likely reach close to 200 AECs per rack in the future. This is something that will fuel the growth as well.
Thank you. One moment for our next question. And our next question will come from the line of Karl Ackerman from BNP Paribas. Your line is open.
Yeah. Thank you, gentlemen. I have two.
I suppose, for the first question, Dan, could you put a finer point on IP licensing revenue in the July quarter? Like, is it cut in half? And do you see IP revenue remaining toward the upper end of your long term, 10% to 15% range for fiscal '25?
Yeah. So for fiscal '25, we internally expect it to be near the high end of that long term model, which again is 10% to 15% of overall revenue. And so from a -- if I were in your shoes to model this, I would assume that it's kind of near a quarter of that annual amount in Q1.
And if you do that, you should kind of solve to within our gross margin range for Q1. I hope that's helpful.
I see. Thank you for that. Perhaps a question for you, Bill. There has been much discussion and confusion about where half-retimed DSPs can be used in the network. For example, active copper cables are being used for inter-rack connectivity, while the primary use case for AOCs and AECs is connecting NICs to TOR and/or middle-of-row switches.
Do you -- my question is, do all AI networks require a fully retimed DSP for either AOC or AEC connections? Thank you.
So this -- this is a much discussed topic in the industry. And I think a year ago, when -- at OFC there was a big discussion about the idea of eliminating the DSP, that really started a lot of effort in pursuing the solution. So there's many companies, many optical module companies that pursued designs with no DSP.
And I think generally, the jury has come in, and basically, there's really no momentum in the market now for solutions with no DSP. So in the optical world that we see right now, the solution for LRO is really quite feasible. And we're showing that with multiple partners, we demonstrated three at OFC, Lumentum being the largest of the three partners.
And what we're seeing is that the solution successfully reduces power. And by the way, that was the entire objective of LPO, was to reduce power of these connections.
And so we've shown that we can deliver 800 gig modules with partners that go sub 10-watts, which is really probably a 30% to 40% reduction compared to what's typical in the market for fully retimed solutions. And so the key with LRO is that we're able to maintain industry standards as well as interoperability.
And so you can literally use an LRO solution, and there's nothing special that you need to do. And so we see that, especially for clusters, these are shorter connections, served by AOCs and likely also transceivers. For these shorter connections of, say, 10 to 20 meters, and especially in the cluster, power is so critical that we see that that entire market could be addressed by LRO solutions. Now, it's obviously going to be up to a given customer and their strategy, but the idea that there will be a large volume of solutions with no DSP, I think that really no longer exists among the customers that we're talking to.
Thank you. One moment for our next question. Our next question will come from the line of Suji Desilva from ROTH. Your line is open.
Hi, Bill. Hi, Dan. Congrats on the progress here. Just, Bill, on your comments on the number of AECs per GPU increasing.
I'm just curious, in general, is that increasing the forecasting needs of your customers in the AI area versus traditional cloud versus three months ago, or is the AI outlook stable and already anticipated? And is the traditional cloud part coming back?
Yeah. I would say there's really no change in the programs that we talked about three months ago. I'd say the additional information that we're sharing today is that, this need for more performance and more bandwidth is really something that we're seeing as we look at next generation AI cluster designs.
And so that's a bit of new information, that the number of connections per GPU is doubling or even more than doubling, and that'll obviously drive growth. Now, as it relates to -- we talk about front end networks and back end networks, and of course AI clusters, they're all connected to the front end network.
As that relates to, say, general compute versus AI, hard for us to see from a forecasting standpoint how that breaks out, because the same AECs are used for both from a front end network perspective. But I would say generally that the momentum in the market for AI, there's no question it's still a huge amount of momentum.
And we see that really for the foreseeable future. If I talk about what's the trade-off for us, if there's a real return from a general compute market share perspective, of course we'll benefit from that. We're really used in both. When we talk about swim lanes, we have AI appliance racks and general compute server racks.
So these are both server racks. We talk about the third swim lane being switch racks, and we see that growing in popularity as well, especially as the market moves towards 100 gig per lane speeds.
Got it. That's very helpful color, Bill, thanks. And then staying on this increase in AECs per GPU.
Does that introduce a customization opportunity as well? I'm thinking kind of the old wire rack opportunity, things like that? Or are those more standard cables, but just more of them?
Yeah. I'd say none of these are standard. So in the AI appliance application, what we're seeing is that there are maybe little or zero standard products being designed right now. So all of them have special features. I will say that we're delivering cables with two connectors, three connectors, four connectors, and even five connectors.
And so when you give these really creative designers of the racks, just the entire AI appliance rack, it's fun to see what they're going to come up with, and we're very much open to making their rack design more efficient.
Thank you. One moment for our next question. And we have a follow-up from the line of Quinn Bolton from Needham. Your line is open.
Hey, Bill. Wondering if you could just sort of address the AEC versus ACC debate that seems to have kind of popped up after OFC, as NVIDIA is looking to use ACCs in its NVLink fabric.
Do you see perhaps, as a result of that, growing adoption of ACCs, or do you think ACCs are going to be really use case limited going forward?
Use case limited. We don't see ACCs anywhere in the market other than what you described at NVIDIA.
Perfect. Very simple. Thank you.
Thank you. One moment for our last question. And our next question will come from the line of Tore Svanberg from Stifel. Your line is open.
Yes. It’s Svanberg, just two follow-ups. So, first of all, Bill, in your prepared remarks, you talked about accelerating the speed to market pretty meaningfully.
Is that a result of your engagements with customers, or have you implemented internally any new technologies or anything like that to really get the product to market quicker?
I think it's really due to a number of things in the way that we've organized and also the way that we work with our customers. I think from the standpoint of our ability to collaborate, it's really on a different level. We've got weekly, if not daily, interaction between our engineering teams at Credo and our customer.
And so that relates directly to our ability to deliver first samples. And then when we talk about moving something into production, there's many different levels of qualification, and we've taken complete ownership of that.
And when we think about that, what does that mean? That means that we've got more than 10 thermal chambers that are in constant use. And what are we doing there? Our customers ship us switches or appliances with the configuration that they want to be qualified, the configuration they're planning on taking to production. We run the qualification tests for them.
So it's live traffic, varying temperature, varying voltage, and we're doing a lot of the work for them upfront. And so when they go into a final qualification mode, they know that what we're delivering is highly predictable because we've already delivered data based on their prescriptive tests that they give us with the equipment that they ship us.
And so from the standpoint of delivering first, it's about being organized to respond quickly. Qualifying first, we're doing a lot of the work for our customers, and that's really taking it up a notch.
Great. And just my last question is a clarification. I just want to make sure, I mean, I think I got this right, but I just want to make sure that it is clear to everybody. So AI revenue, the three quarters, that includes product and licensing revenue. And that is the number that you expect to double year-over-year by Q4 fiscal '25.
That is correct.
Great. Thank you.
Thank you. And there are no further questions at this time. Mr. Brennan, I'll turn the call back over to you.
So thanks to everybody for the questions. We really appreciate the participation and we look forward to the continued conversation on the callbacks. Thank you.
And this concludes today's conference call. You may now disconnect. Everyone, have a great day.